00:00:00.001 Started by upstream project "autotest-per-patch" build number 132514 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:01.838 The recommended git tool is: git 00:00:01.838 using credential 00000000-0000-0000-0000-000000000002 00:00:01.841 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.850 Fetching changes from the remote Git repository 00:00:01.855 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.865 Using shallow fetch with depth 1 00:00:01.865 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.865 > git --version # timeout=10 00:00:01.876 > git --version # 'git version 2.39.2' 00:00:01.876 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:01.886 Setting http proxy: proxy-dmz.intel.com:911 00:00:01.886 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.370 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.381 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.394 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.394 > git config core.sparsecheckout # timeout=10 00:00:08.404 > git read-tree -mu HEAD # timeout=10 00:00:08.419 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.445 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.445 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.551 [Pipeline] Start of Pipeline 00:00:08.563 [Pipeline] library 00:00:08.564 Loading library shm_lib@master 00:00:08.564 Library shm_lib@master is cached. Copying from home. 00:00:08.577 [Pipeline] node 00:00:08.586 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.588 [Pipeline] { 00:00:08.599 [Pipeline] catchError 00:00:08.601 [Pipeline] { 00:00:08.614 [Pipeline] wrap 00:00:08.621 [Pipeline] { 00:00:08.627 [Pipeline] stage 00:00:08.628 [Pipeline] { (Prologue) 00:00:08.640 [Pipeline] echo 00:00:08.641 Node: VM-host-WFP1 00:00:08.646 [Pipeline] cleanWs 00:00:08.654 [WS-CLEANUP] Deleting project workspace... 00:00:08.654 [WS-CLEANUP] Deferred wipeout is used... 
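The prologue above amounts to a credentialed shallow checkout of the job-config (jbp) repository pinned to a single revision. A minimal sketch of the equivalent manual sequence, using only the URL and commit recorded in the log (Gerrit credentials and the proxy setting are assumed to be configured separately):

# Sketch: reproduce the pipeline's shallow, pinned jbp checkout by hand.
git init jbp && cd jbp
git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --depth=1 origin refs/heads/master   # shallow fetch, as in the log
git checkout -f db4637e8b949f278f369ec13f70585206ccd9507      # FETCH_HEAD revision from the log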
00:00:08.660 [WS-CLEANUP] done 00:00:08.870 [Pipeline] setCustomBuildProperty 00:00:08.967 [Pipeline] httpRequest 00:00:09.326 [Pipeline] echo 00:00:09.327 Sorcerer 10.211.164.20 is alive 00:00:09.336 [Pipeline] retry 00:00:09.338 [Pipeline] { 00:00:09.349 [Pipeline] httpRequest 00:00:09.358 HttpMethod: GET 00:00:09.359 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.360 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.368 Response Code: HTTP/1.1 200 OK 00:00:09.368 Success: Status code 200 is in the accepted range: 200,404 00:00:09.369 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.000 [Pipeline] } 00:00:17.018 [Pipeline] // retry 00:00:17.026 [Pipeline] sh 00:00:17.307 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.322 [Pipeline] httpRequest 00:00:17.668 [Pipeline] echo 00:00:17.670 Sorcerer 10.211.164.20 is alive 00:00:17.680 [Pipeline] retry 00:00:17.683 [Pipeline] { 00:00:17.697 [Pipeline] httpRequest 00:00:17.703 HttpMethod: GET 00:00:17.703 URL: http://10.211.164.20/packages/spdk_d8f6e798d6e6228e43cdb5f74ee92982e9d5c1bd.tar.gz 00:00:17.704 Sending request to url: http://10.211.164.20/packages/spdk_d8f6e798d6e6228e43cdb5f74ee92982e9d5c1bd.tar.gz 00:00:17.709 Response Code: HTTP/1.1 200 OK 00:00:17.710 Success: Status code 200 is in the accepted range: 200,404 00:00:17.710 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d8f6e798d6e6228e43cdb5f74ee92982e9d5c1bd.tar.gz 00:03:31.574 [Pipeline] } 00:03:31.596 [Pipeline] // retry 00:03:31.605 [Pipeline] sh 00:03:31.885 + tar --no-same-owner -xf spdk_d8f6e798d6e6228e43cdb5f74ee92982e9d5c1bd.tar.gz 00:03:34.428 [Pipeline] sh 00:03:34.727 + git -C spdk log --oneline -n5 00:03:34.727 d8f6e798d nvme: Fix discovery loop when target has no entry 00:03:34.727 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen 00:03:34.727 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE 00:03:34.727 9a6847636 bdev/nvme: Fix spdk_bdev_nvme_create() 00:03:34.727 8bbc7b697 nvmf: Block ctrlr-only admin cmds if NSID is set 00:03:34.745 [Pipeline] writeFile 00:03:34.758 [Pipeline] sh 00:03:35.035 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:35.046 [Pipeline] sh 00:03:35.326 + cat autorun-spdk.conf 00:03:35.326 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:35.326 SPDK_TEST_NVME=1 00:03:35.326 SPDK_TEST_FTL=1 00:03:35.326 SPDK_TEST_ISAL=1 00:03:35.326 SPDK_RUN_ASAN=1 00:03:35.326 SPDK_RUN_UBSAN=1 00:03:35.326 SPDK_TEST_XNVME=1 00:03:35.326 SPDK_TEST_NVME_FDP=1 00:03:35.326 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:35.333 RUN_NIGHTLY=0 00:03:35.335 [Pipeline] } 00:03:35.350 [Pipeline] // stage 00:03:35.366 [Pipeline] stage 00:03:35.368 [Pipeline] { (Run VM) 00:03:35.382 [Pipeline] sh 00:03:35.663 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:35.663 + echo 'Start stage prepare_nvme.sh' 00:03:35.663 Start stage prepare_nvme.sh 00:03:35.663 + [[ -n 6 ]] 00:03:35.663 + disk_prefix=ex6 00:03:35.663 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:03:35.663 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:03:35.663 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:03:35.663 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:35.663 ++ SPDK_TEST_NVME=1 00:03:35.663 ++ SPDK_TEST_FTL=1 00:03:35.663 ++ SPDK_TEST_ISAL=1 00:03:35.663 ++ 
SPDK_RUN_ASAN=1 00:03:35.663 ++ SPDK_RUN_UBSAN=1 00:03:35.663 ++ SPDK_TEST_XNVME=1 00:03:35.663 ++ SPDK_TEST_NVME_FDP=1 00:03:35.663 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:35.663 ++ RUN_NIGHTLY=0 00:03:35.663 + cd /var/jenkins/workspace/nvme-vg-autotest 00:03:35.663 + nvme_files=() 00:03:35.663 + declare -A nvme_files 00:03:35.663 + backend_dir=/var/lib/libvirt/images/backends 00:03:35.663 + nvme_files['nvme.img']=5G 00:03:35.663 + nvme_files['nvme-cmb.img']=5G 00:03:35.663 + nvme_files['nvme-multi0.img']=4G 00:03:35.663 + nvme_files['nvme-multi1.img']=4G 00:03:35.663 + nvme_files['nvme-multi2.img']=4G 00:03:35.663 + nvme_files['nvme-openstack.img']=8G 00:03:35.663 + nvme_files['nvme-zns.img']=5G 00:03:35.663 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:35.663 + (( SPDK_TEST_FTL == 1 )) 00:03:35.663 + nvme_files["nvme-ftl.img"]=6G 00:03:35.663 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:35.663 + nvme_files["nvme-fdp.img"]=1G 00:03:35.663 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:35.663 + for nvme in "${!nvme_files[@]}" 00:03:35.663 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:03:35.663 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:35.663 + for nvme in "${!nvme_files[@]}" 00:03:35.663 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:03:35.922 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:03:35.922 + for nvme in "${!nvme_files[@]}" 00:03:35.922 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:03:35.922 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:35.922 + for nvme in "${!nvme_files[@]}" 00:03:35.922 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:03:35.922 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:35.922 + for nvme in "${!nvme_files[@]}" 00:03:35.922 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:03:35.922 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:35.922 + for nvme in "${!nvme_files[@]}" 00:03:35.922 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:03:36.182 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:36.182 + for nvme in "${!nvme_files[@]}" 00:03:36.182 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:03:36.440 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:36.440 + for nvme in "${!nvme_files[@]}" 00:03:36.440 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:03:36.440 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:03:36.440 + for nvme in "${!nvme_files[@]}" 00:03:36.440 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:03:36.697 Formatting 
'/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:36.697 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:03:36.697 + echo 'End stage prepare_nvme.sh' 00:03:36.697 End stage prepare_nvme.sh 00:03:36.709 [Pipeline] sh 00:03:36.989 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:36.989 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:03:36.989 00:03:36.989 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:03:36.989 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:03:36.989 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:03:36.989 HELP=0 00:03:36.989 DRY_RUN=0 00:03:36.989 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:03:36.989 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:03:36.989 NVME_AUTO_CREATE=0 00:03:36.989 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:03:36.989 NVME_CMB=,,,, 00:03:36.989 NVME_PMR=,,,, 00:03:36.989 NVME_ZNS=,,,, 00:03:36.989 NVME_MS=true,,,, 00:03:36.989 NVME_FDP=,,,on, 00:03:36.989 SPDK_VAGRANT_DISTRO=fedora39 00:03:36.989 SPDK_VAGRANT_VMCPU=10 00:03:36.989 SPDK_VAGRANT_VMRAM=12288 00:03:36.989 SPDK_VAGRANT_PROVIDER=libvirt 00:03:36.989 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:36.989 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:36.989 SPDK_OPENSTACK_NETWORK=0 00:03:36.989 VAGRANT_PACKAGE_BOX=0 00:03:36.989 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:36.989 FORCE_DISTRO=true 00:03:36.989 VAGRANT_BOX_VERSION= 00:03:36.989 EXTRA_VAGRANTFILES= 00:03:36.989 NIC_MODEL=e1000 00:03:36.989 00:03:36.989 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:03:36.989 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:03:40.274 Bringing machine 'default' up with 'libvirt' provider... 00:03:41.649 ==> default: Creating image (snapshot of base box volume). 00:03:41.649 ==> default: Creating domain with the following settings... 
00:03:41.649 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732566108_24e719484bacd4e4af17 00:03:41.649 ==> default: -- Domain type: kvm 00:03:41.649 ==> default: -- Cpus: 10 00:03:41.649 ==> default: -- Feature: acpi 00:03:41.649 ==> default: -- Feature: apic 00:03:41.649 ==> default: -- Feature: pae 00:03:41.649 ==> default: -- Memory: 12288M 00:03:41.649 ==> default: -- Memory Backing: hugepages: 00:03:41.649 ==> default: -- Management MAC: 00:03:41.649 ==> default: -- Loader: 00:03:41.649 ==> default: -- Nvram: 00:03:41.649 ==> default: -- Base box: spdk/fedora39 00:03:41.649 ==> default: -- Storage pool: default 00:03:41.649 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732566108_24e719484bacd4e4af17.img (20G) 00:03:41.649 ==> default: -- Volume Cache: default 00:03:41.649 ==> default: -- Kernel: 00:03:41.649 ==> default: -- Initrd: 00:03:41.649 ==> default: -- Graphics Type: vnc 00:03:41.649 ==> default: -- Graphics Port: -1 00:03:41.649 ==> default: -- Graphics IP: 127.0.0.1 00:03:41.649 ==> default: -- Graphics Password: Not defined 00:03:41.649 ==> default: -- Video Type: cirrus 00:03:41.649 ==> default: -- Video VRAM: 9216 00:03:41.649 ==> default: -- Sound Type: 00:03:41.649 ==> default: -- Keymap: en-us 00:03:41.649 ==> default: -- TPM Path: 00:03:41.649 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:41.649 ==> default: -- Command line args: 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:41.649 ==> default: -> value=-drive, 00:03:41.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:41.649 ==> default: -> value=-drive, 00:03:41.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:03:41.649 ==> default: -> value=-drive, 00:03:41.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:41.649 ==> default: -> value=-drive, 00:03:41.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:41.649 ==> default: -> value=-drive, 00:03:41.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:03:41.649 ==> default: -> value=-drive, 00:03:41.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:03:41.649 ==> default: -> value=-device, 00:03:41.649 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:42.215 ==> default: Creating shared folders metadata... 00:03:42.215 ==> default: Starting domain. 00:03:45.572 ==> default: Waiting for domain to get an IP address... 00:04:03.698 ==> default: Waiting for SSH to become available... 00:04:03.698 ==> default: Configuring and enabling network interfaces... 00:04:08.973 default: SSH address: 192.168.121.51:22 00:04:08.973 default: SSH username: vagrant 00:04:08.973 default: SSH auth method: private key 00:04:11.508 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:21.490 ==> default: Mounting SSHFS shared folder... 00:04:22.869 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:22.869 ==> default: Checking Mount.. 00:04:24.772 ==> default: Folder Successfully Mounted! 00:04:24.772 ==> default: Running provisioner: file... 00:04:25.706 default: ~/.gitconfig => .gitconfig 00:04:26.281 00:04:26.281 SUCCESS! 00:04:26.281 00:04:26.281 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:04:26.281 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:26.281 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:04:26.281 00:04:26.290 [Pipeline] } 00:04:26.306 [Pipeline] // stage 00:04:26.316 [Pipeline] dir 00:04:26.317 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:04:26.318 [Pipeline] { 00:04:26.331 [Pipeline] catchError 00:04:26.333 [Pipeline] { 00:04:26.346 [Pipeline] sh 00:04:26.635 + vagrant ssh-config --host vagrant 00:04:26.635 + tee ssh_conf 00:04:26.635 + sed -ne /^Host/,$p 00:04:29.938 Host vagrant 00:04:29.938 HostName 192.168.121.51 00:04:29.938 User vagrant 00:04:29.938 Port 22 00:04:29.938 UserKnownHostsFile /dev/null 00:04:29.938 StrictHostKeyChecking no 00:04:29.938 PasswordAuthentication no 00:04:29.938 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:29.938 IdentitiesOnly yes 00:04:29.938 LogLevel FATAL 00:04:29.938 ForwardAgent yes 00:04:29.938 ForwardX11 yes 00:04:29.938 00:04:29.953 [Pipeline] withEnv 00:04:29.956 [Pipeline] { 00:04:29.970 [Pipeline] sh 00:04:30.255 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:30.255 source /etc/os-release 00:04:30.255 [[ -e /image.version ]] && img=$(< /image.version) 00:04:30.255 # Minimal, systemd-like check. 
00:04:30.255 if [[ -e /.dockerenv ]]; then 00:04:30.255 # Clear garbage from the node's name: 00:04:30.255 # agt-er_autotest_547-896 -> autotest_547-896 00:04:30.255 # $HOSTNAME is the actual container id 00:04:30.255 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:30.255 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:30.255 # We can assume this is a mount from a host where container is running, 00:04:30.255 # so fetch its hostname to easily identify the target swarm worker. 00:04:30.255 container="$(< /etc/hostname) ($agent)" 00:04:30.255 else 00:04:30.255 # Fallback 00:04:30.255 container=$agent 00:04:30.255 fi 00:04:30.255 fi 00:04:30.255 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:30.255 00:04:30.526 [Pipeline] } 00:04:30.543 [Pipeline] // withEnv 00:04:30.552 [Pipeline] setCustomBuildProperty 00:04:30.568 [Pipeline] stage 00:04:30.570 [Pipeline] { (Tests) 00:04:30.587 [Pipeline] sh 00:04:30.870 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:31.143 [Pipeline] sh 00:04:31.425 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:31.699 [Pipeline] timeout 00:04:31.700 Timeout set to expire in 50 min 00:04:31.701 [Pipeline] { 00:04:31.715 [Pipeline] sh 00:04:31.997 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:32.566 HEAD is now at d8f6e798d nvme: Fix discovery loop when target has no entry 00:04:32.578 [Pipeline] sh 00:04:32.864 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:33.141 [Pipeline] sh 00:04:33.419 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:33.694 [Pipeline] sh 00:04:33.974 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:04:34.232 ++ readlink -f spdk_repo 00:04:34.232 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:34.232 + [[ -n /home/vagrant/spdk_repo ]] 00:04:34.232 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:34.232 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:34.232 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:34.232 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:34.232 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:34.232 + [[ nvme-vg-autotest == pkgdep-* ]] 00:04:34.232 + cd /home/vagrant/spdk_repo 00:04:34.232 + source /etc/os-release 00:04:34.232 ++ NAME='Fedora Linux' 00:04:34.232 ++ VERSION='39 (Cloud Edition)' 00:04:34.232 ++ ID=fedora 00:04:34.232 ++ VERSION_ID=39 00:04:34.232 ++ VERSION_CODENAME= 00:04:34.232 ++ PLATFORM_ID=platform:f39 00:04:34.232 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:34.232 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:34.232 ++ LOGO=fedora-logo-icon 00:04:34.232 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:34.232 ++ HOME_URL=https://fedoraproject.org/ 00:04:34.232 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:34.232 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:34.232 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:34.232 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:34.232 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:34.232 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:34.232 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:34.232 ++ SUPPORT_END=2024-11-12 00:04:34.232 ++ VARIANT='Cloud Edition' 00:04:34.232 ++ VARIANT_ID=cloud 00:04:34.232 + uname -a 00:04:34.233 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:34.233 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:34.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.061 Hugepages 00:04:35.061 node hugesize free / total 00:04:35.061 node0 1048576kB 0 / 0 00:04:35.061 node0 2048kB 0 / 0 00:04:35.061 00:04:35.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.061 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:35.061 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:35.061 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:35.061 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:35.061 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:35.320 + rm -f /tmp/spdk-ld-path 00:04:35.320 + source autorun-spdk.conf 00:04:35.320 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:35.320 ++ SPDK_TEST_NVME=1 00:04:35.320 ++ SPDK_TEST_FTL=1 00:04:35.320 ++ SPDK_TEST_ISAL=1 00:04:35.320 ++ SPDK_RUN_ASAN=1 00:04:35.320 ++ SPDK_RUN_UBSAN=1 00:04:35.320 ++ SPDK_TEST_XNVME=1 00:04:35.321 ++ SPDK_TEST_NVME_FDP=1 00:04:35.321 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:35.321 ++ RUN_NIGHTLY=0 00:04:35.321 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:35.321 + [[ -n '' ]] 00:04:35.321 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:35.321 + for M in /var/spdk/build-*-manifest.txt 00:04:35.321 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:35.321 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:35.321 + for M in /var/spdk/build-*-manifest.txt 00:04:35.321 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:35.321 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:35.321 + for M in /var/spdk/build-*-manifest.txt 00:04:35.321 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:35.321 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:35.321 ++ uname 00:04:35.321 + [[ Linux == \L\i\n\u\x ]] 00:04:35.321 + sudo dmesg -T 00:04:35.321 + sudo dmesg --clear 00:04:35.321 + dmesg_pid=5247 00:04:35.321 
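The `setup.sh status` table earlier in this stage (hugepage counts per node plus the NVMe controller list) can be approximated with plain sysfs reads. A rough sketch, assuming standard Linux sysfs paths; this is only an approximation, not what setup.sh does internally:

# Sketch: rough stand-in for the hugepage/NVMe summary printed by setup.sh status.
shopt -s nullglob   # skip the loops cleanly if a glob matches nothing
for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
  printf '%s free=%s total=%s\n' "${d##*/}" "$(cat "$d/free_hugepages")" "$(cat "$d/nr_hugepages")"
done
for c in /sys/class/nvme/nvme*; do
  # Each controller's 'device' symlink resolves to its PCI BDF (e.g. 0000:00:10.0).
  printf '%s %s\n' "${c##*/}" "$(basename "$(readlink -f "$c/device")")"
done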
+ sudo dmesg -Tw 00:04:35.321 + [[ Fedora Linux == FreeBSD ]] 00:04:35.321 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:35.321 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:35.321 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:35.321 + [[ -x /usr/src/fio-static/fio ]] 00:04:35.321 + export FIO_BIN=/usr/src/fio-static/fio 00:04:35.321 + FIO_BIN=/usr/src/fio-static/fio 00:04:35.321 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:35.321 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:35.321 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:35.321 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:35.321 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:35.321 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:35.321 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:35.321 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:35.321 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.580 20:22:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:35.580 20:22:43 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:35.580 20:22:43 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:04:35.580 20:22:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:35.580 20:22:43 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.580 20:22:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:35.580 20:22:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.580 20:22:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:35.580 20:22:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:35.580 20:22:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.580 20:22:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.580 20:22:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.580 20:22:43 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.580 20:22:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.580 20:22:43 -- paths/export.sh@5 -- $ export PATH 00:04:35.580 20:22:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.580 20:22:43 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:35.580 20:22:43 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:35.580 20:22:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732566163.XXXXXX 00:04:35.580 20:22:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732566163.zMaowP 00:04:35.580 20:22:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:35.580 20:22:43 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:35.580 20:22:43 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:35.580 20:22:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:35.580 20:22:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:35.580 20:22:43 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:35.580 20:22:43 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:35.580 20:22:43 -- common/autotest_common.sh@10 -- $ set +x 00:04:35.580 20:22:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:04:35.580 20:22:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:35.580 20:22:43 -- pm/common@17 -- $ local monitor 00:04:35.580 20:22:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.580 20:22:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.580 20:22:43 -- pm/common@25 -- $ sleep 1 00:04:35.580 20:22:43 -- pm/common@21 -- $ date +%s 00:04:35.580 20:22:43 -- pm/common@21 -- $ date +%s 00:04:35.580 20:22:43 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732566163 00:04:35.580 20:22:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732566163 00:04:35.839 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732566163_collect-vmstat.pm.log 00:04:35.839 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732566163_collect-cpu-load.pm.log 00:04:36.774 20:22:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:36.774 20:22:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:36.774 20:22:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:36.774 20:22:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:36.774 20:22:44 -- spdk/autobuild.sh@16 -- $ date -u 00:04:36.774 Mon Nov 25 08:22:44 PM UTC 2024 00:04:36.774 20:22:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:36.774 v25.01-pre-237-gd8f6e798d 00:04:36.774 20:22:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:36.774 20:22:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:36.774 20:22:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:36.774 20:22:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:36.774 20:22:44 -- common/autotest_common.sh@10 -- $ set +x 00:04:36.774 ************************************ 00:04:36.774 START TEST asan 00:04:36.774 ************************************ 00:04:36.774 using asan 00:04:36.774 20:22:44 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:04:36.774 00:04:36.774 real 0m0.001s 00:04:36.774 user 0m0.001s 00:04:36.774 sys 0m0.000s 00:04:36.774 20:22:44 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:36.774 ************************************ 00:04:36.774 END TEST asan 00:04:36.774 ************************************ 00:04:36.774 20:22:44 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:36.774 20:22:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:36.774 20:22:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:36.774 20:22:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:36.774 20:22:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:36.774 20:22:44 -- common/autotest_common.sh@10 -- $ set +x 00:04:36.774 ************************************ 00:04:36.774 START TEST ubsan 00:04:36.774 ************************************ 00:04:36.774 using ubsan 00:04:36.774 20:22:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:36.774 00:04:36.774 real 0m0.001s 00:04:36.774 user 0m0.000s 00:04:36.774 sys 0m0.000s 00:04:36.774 20:22:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:36.774 20:22:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:36.774 ************************************ 00:04:36.774 END TEST ubsan 00:04:36.774 ************************************ 00:04:36.774 20:22:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:36.774 20:22:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:36.774 20:22:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:36.774 20:22:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:36.774 20:22:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:36.774 20:22:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:36.774 20:22:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
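The `git describe --tags` value printed above, v25.01-pre-237-gd8f6e798d, packs three fields into one string: the nearest tag (v25.01-pre), the number of commits since that tag (237), and the abbreviated commit hash (d8f6e798d). A short sketch of inspecting the same fields explicitly, run inside the spdk checkout:

# Sketch: decode the <tag>-<count>-g<hash> form shown in the log.
git describe --tags --long          # always prints all three fields, even on a tagged commit
git rev-parse --short=9 d8f6e798d   # confirms the abbreviated object name resolves to a commit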
00:04:36.774 20:22:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:36.774 20:22:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:04:37.032 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:37.032 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:37.598 Using 'verbs' RDMA provider
00:04:57.150 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:12.032 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:12.032 Creating mk/config.mk...done.
00:05:12.032 Creating mk/cc.flags.mk...done.
00:05:12.032 Type 'make' to build.
00:05:12.032 20:23:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:12.032 20:23:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:12.032 20:23:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:12.032 20:23:19 -- common/autotest_common.sh@10 -- $ set +x
00:05:12.032 ************************************
00:05:12.032 START TEST make
00:05:12.032 ************************************
00:05:12.032 20:23:19 make -- common/autotest_common.sh@1129 -- $ make -j10
00:05:12.032 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:05:12.032 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:05:12.032 meson setup builddir \
00:05:12.032 -Dwith-libaio=enabled \
00:05:12.032 -Dwith-liburing=enabled \
00:05:12.032 -Dwith-libvfn=disabled \
00:05:12.032 -Dwith-spdk=disabled \
00:05:12.032 -Dexamples=false \
00:05:12.032 -Dtests=false \
00:05:12.032 -Dtools=false && \
00:05:12.032 meson compile -C builddir && \
00:05:12.032 cd -)
00:05:12.032 make[1]: Nothing to be done for 'all'.
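Once the `meson setup` in the block above has run, the feature switches it was given can be read back from the build directory. A small sketch, run from spdk/xnvme with builddir as in the log (option names taken from the -D flags above):

# Sketch: read back the xnvme subbuild's recorded feature options.
meson configure builddir | grep -E 'with-(libaio|liburing|libvfn|spdk)'
meson introspect builddir --buildoptions > /tmp/xnvme-opts.json   # full option dump as JSON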
00:05:15.319 The Meson build system 00:05:15.319 Version: 1.5.0 00:05:15.319 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:05:15.319 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:05:15.319 Build type: native build 00:05:15.319 Project name: xnvme 00:05:15.319 Project version: 0.7.5 00:05:15.319 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:15.319 C linker for the host machine: cc ld.bfd 2.40-14 00:05:15.319 Host machine cpu family: x86_64 00:05:15.319 Host machine cpu: x86_64 00:05:15.319 Message: host_machine.system: linux 00:05:15.319 Compiler for C supports arguments -Wno-missing-braces: YES 00:05:15.319 Compiler for C supports arguments -Wno-cast-function-type: YES 00:05:15.319 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:05:15.319 Run-time dependency threads found: YES 00:05:15.319 Has header "setupapi.h" : NO 00:05:15.319 Has header "linux/blkzoned.h" : YES 00:05:15.319 Has header "linux/blkzoned.h" : YES (cached) 00:05:15.319 Has header "libaio.h" : YES 00:05:15.319 Library aio found: YES 00:05:15.319 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:15.319 Run-time dependency liburing found: YES 2.2 00:05:15.319 Dependency libvfn skipped: feature with-libvfn disabled 00:05:15.319 Found CMake: /usr/bin/cmake (3.27.7) 00:05:15.319 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:05:15.319 Subproject spdk : skipped: feature with-spdk disabled 00:05:15.319 Run-time dependency appleframeworks found: NO (tried framework) 00:05:15.319 Run-time dependency appleframeworks found: NO (tried framework) 00:05:15.319 Library rt found: YES 00:05:15.319 Checking for function "clock_gettime" with dependency -lrt: YES 00:05:15.319 Configuring xnvme_config.h using configuration 00:05:15.319 Configuring xnvme.spec using configuration 00:05:15.319 Run-time dependency bash-completion found: YES 2.11 00:05:15.319 Message: Bash-completions: /usr/share/bash-completion/completions 00:05:15.319 Program cp found: YES (/usr/bin/cp) 00:05:15.319 Build targets in project: 3 00:05:15.319 00:05:15.319 xnvme 0.7.5 00:05:15.319 00:05:15.319 Subprojects 00:05:15.319 spdk : NO Feature 'with-spdk' disabled 00:05:15.319 00:05:15.319 User defined options 00:05:15.319 examples : false 00:05:15.319 tests : false 00:05:15.319 tools : false 00:05:15.319 with-libaio : enabled 00:05:15.319 with-liburing: enabled 00:05:15.319 with-libvfn : disabled 00:05:15.319 with-spdk : disabled 00:05:15.319 00:05:15.319 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:15.974 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:05:15.974 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:05:15.974 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:05:15.974 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:05:16.233 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:05:16.234 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:05:16.234 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:05:16.234 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:05:16.234 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:05:16.234 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:05:16.234 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:05:16.234 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:05:16.234 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:05:16.234 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:05:16.234 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:05:16.234 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:05:16.234 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:05:16.234 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:05:16.234 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:05:16.234 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:05:16.493 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:05:16.493 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:05:16.493 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:05:16.493 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:05:16.493 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:05:16.493 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:05:16.493 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:05:16.493 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:05:16.493 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:05:16.493 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:05:16.493 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:05:16.493 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:05:16.493 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:05:16.493 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:05:16.493 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:05:16.493 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:05:16.493 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:05:16.493 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:05:16.493 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:05:16.751 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:05:16.751 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:05:16.751 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:05:16.751 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:05:16.752 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:05:16.752 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:05:16.752 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:05:16.752 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:05:16.752 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:05:16.752 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:05:16.752 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:05:16.752 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 
00:05:16.752 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:05:16.752 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:05:16.752 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:05:16.752 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:05:16.752 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:05:16.752 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:05:17.010 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:05:17.010 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:05:17.010 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:05:17.010 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:05:17.010 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:05:17.010 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:05:17.010 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:05:17.010 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:05:17.010 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:05:17.010 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:05:17.010 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:05:17.010 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:05:17.010 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:05:17.010 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:05:17.268 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:05:17.268 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:05:17.268 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:05:17.528 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:05:17.528 [75/76] Linking static target lib/libxnvme.a 00:05:17.528 [76/76] Linking target lib/libxnvme.so.0.7.5 00:05:17.528 INFO: autodetecting backend as ninja 00:05:17.528 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:05:17.788 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:27.759 The Meson build system 00:05:27.759 Version: 1.5.0 00:05:27.759 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:27.759 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:27.759 Build type: native build 00:05:27.759 Program cat found: YES (/usr/bin/cat) 00:05:27.759 Project name: DPDK 00:05:27.759 Project version: 24.03.0 00:05:27.759 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:27.759 C linker for the host machine: cc ld.bfd 2.40-14 00:05:27.759 Host machine cpu family: x86_64 00:05:27.759 Host machine cpu: x86_64 00:05:27.759 Message: ## Building in Developer Mode ## 00:05:27.759 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:27.759 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:27.759 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:27.759 Program python3 found: YES (/usr/bin/python3) 00:05:27.759 Program cat found: YES (/usr/bin/cat) 00:05:27.759 Compiler for C supports arguments -march=native: YES 00:05:27.759 Checking for size of "void *" : 8 00:05:27.759 Checking for size of "void *" : 8 (cached) 00:05:27.759 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:05:27.759 Library m found: YES 00:05:27.759 Library numa found: YES 00:05:27.759 Has header "numaif.h" : YES 00:05:27.759 Library fdt found: NO 00:05:27.759 Library execinfo found: NO 00:05:27.759 Has header "execinfo.h" : YES 00:05:27.759 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:27.759 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:27.759 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:27.759 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:27.759 Run-time dependency openssl found: YES 3.1.1 00:05:27.759 Run-time dependency libpcap found: YES 1.10.4 00:05:27.760 Has header "pcap.h" with dependency libpcap: YES 00:05:27.760 Compiler for C supports arguments -Wcast-qual: YES 00:05:27.760 Compiler for C supports arguments -Wdeprecated: YES 00:05:27.760 Compiler for C supports arguments -Wformat: YES 00:05:27.760 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:27.760 Compiler for C supports arguments -Wformat-security: NO 00:05:27.760 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:27.760 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:27.760 Compiler for C supports arguments -Wnested-externs: YES 00:05:27.760 Compiler for C supports arguments -Wold-style-definition: YES 00:05:27.760 Compiler for C supports arguments -Wpointer-arith: YES 00:05:27.760 Compiler for C supports arguments -Wsign-compare: YES 00:05:27.760 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:27.760 Compiler for C supports arguments -Wundef: YES 00:05:27.760 Compiler for C supports arguments -Wwrite-strings: YES 00:05:27.760 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:27.760 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:27.760 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:27.760 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:27.760 Program objdump found: YES (/usr/bin/objdump) 00:05:27.760 Compiler for C supports arguments -mavx512f: YES 00:05:27.760 Checking if "AVX512 checking" compiles: YES 00:05:27.760 Fetching value of define "__SSE4_2__" : 1 00:05:27.760 Fetching value of define "__AES__" : 1 00:05:27.760 Fetching value of define "__AVX__" : 1 00:05:27.760 Fetching value of define "__AVX2__" : 1 00:05:27.760 Fetching value of define "__AVX512BW__" : 1 00:05:27.760 Fetching value of define "__AVX512CD__" : 1 00:05:27.760 Fetching value of define "__AVX512DQ__" : 1 00:05:27.760 Fetching value of define "__AVX512F__" : 1 00:05:27.760 Fetching value of define "__AVX512VL__" : 1 00:05:27.760 Fetching value of define "__PCLMUL__" : 1 00:05:27.760 Fetching value of define "__RDRND__" : 1 00:05:27.760 Fetching value of define "__RDSEED__" : 1 00:05:27.760 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:27.760 Fetching value of define "__znver1__" : (undefined) 00:05:27.760 Fetching value of define "__znver2__" : (undefined) 00:05:27.760 Fetching value of define "__znver3__" : (undefined) 00:05:27.760 Fetching value of define "__znver4__" : (undefined) 00:05:27.760 Library asan found: YES 00:05:27.760 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:27.760 Message: lib/log: Defining dependency "log" 00:05:27.760 Message: lib/kvargs: Defining dependency "kvargs" 00:05:27.760 Message: lib/telemetry: Defining dependency "telemetry" 00:05:27.760 Library rt found: YES 00:05:27.760 Checking for function "getentropy" : 
NO 00:05:27.760 Message: lib/eal: Defining dependency "eal" 00:05:27.760 Message: lib/ring: Defining dependency "ring" 00:05:27.760 Message: lib/rcu: Defining dependency "rcu" 00:05:27.760 Message: lib/mempool: Defining dependency "mempool" 00:05:27.760 Message: lib/mbuf: Defining dependency "mbuf" 00:05:27.760 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:27.760 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:27.760 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:27.760 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:27.760 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:27.760 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:27.760 Compiler for C supports arguments -mpclmul: YES 00:05:27.760 Compiler for C supports arguments -maes: YES 00:05:27.760 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:27.760 Compiler for C supports arguments -mavx512bw: YES 00:05:27.760 Compiler for C supports arguments -mavx512dq: YES 00:05:27.760 Compiler for C supports arguments -mavx512vl: YES 00:05:27.760 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:27.760 Compiler for C supports arguments -mavx2: YES 00:05:27.760 Compiler for C supports arguments -mavx: YES 00:05:27.760 Message: lib/net: Defining dependency "net" 00:05:27.760 Message: lib/meter: Defining dependency "meter" 00:05:27.760 Message: lib/ethdev: Defining dependency "ethdev" 00:05:27.760 Message: lib/pci: Defining dependency "pci" 00:05:27.760 Message: lib/cmdline: Defining dependency "cmdline" 00:05:27.760 Message: lib/hash: Defining dependency "hash" 00:05:27.760 Message: lib/timer: Defining dependency "timer" 00:05:27.760 Message: lib/compressdev: Defining dependency "compressdev" 00:05:27.760 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:27.760 Message: lib/dmadev: Defining dependency "dmadev" 00:05:27.760 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:27.760 Message: lib/power: Defining dependency "power" 00:05:27.760 Message: lib/reorder: Defining dependency "reorder" 00:05:27.760 Message: lib/security: Defining dependency "security" 00:05:27.760 Has header "linux/userfaultfd.h" : YES 00:05:27.760 Has header "linux/vduse.h" : YES 00:05:27.760 Message: lib/vhost: Defining dependency "vhost" 00:05:27.760 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:27.760 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:27.760 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:27.760 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:27.760 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:27.760 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:27.760 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:27.760 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:27.760 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:27.760 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:27.760 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:27.760 Configuring doxy-api-html.conf using configuration 00:05:27.760 Configuring doxy-api-man.conf using configuration 00:05:27.760 Program mandb found: YES (/usr/bin/mandb) 00:05:27.760 Program sphinx-build found: NO 00:05:27.760 Configuring rte_build_config.h using configuration 00:05:27.760 Message: 00:05:27.760 ================= 00:05:27.760 
Applications Enabled
00:05:27.760 =================
00:05:27.760 
00:05:27.760 apps:
00:05:27.760 
00:05:27.760 
00:05:27.760 Message:
00:05:27.760 =================
00:05:27.760 Libraries Enabled
00:05:27.760 =================
00:05:27.760 
00:05:27.760 libs:
00:05:27.760 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:27.760 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:27.760 cryptodev, dmadev, power, reorder, security, vhost,
00:05:27.760 
00:05:27.760 Message:
00:05:27.760 ===============
00:05:27.760 Drivers Enabled
00:05:27.760 ===============
00:05:27.760 
00:05:27.760 common:
00:05:27.760 
00:05:27.760 bus:
00:05:27.760 pci, vdev,
00:05:27.760 mempool:
00:05:27.760 ring,
00:05:27.760 dma:
00:05:27.760 
00:05:27.760 net:
00:05:27.760 
00:05:27.760 crypto:
00:05:27.760 
00:05:27.760 compress:
00:05:27.760 
00:05:27.760 vdpa:
00:05:27.760 
00:05:27.760 
00:05:27.760 Message:
00:05:27.760 =================
00:05:27.760 Content Skipped
00:05:27.760 =================
00:05:27.760 
00:05:27.760 apps:
00:05:27.760 dumpcap: explicitly disabled via build config
00:05:27.760 graph: explicitly disabled via build config
00:05:27.760 pdump: explicitly disabled via build config
00:05:27.760 proc-info: explicitly disabled via build config
00:05:27.760 test-acl: explicitly disabled via build config
00:05:27.760 test-bbdev: explicitly disabled via build config
00:05:27.760 test-cmdline: explicitly disabled via build config
00:05:27.760 test-compress-perf: explicitly disabled via build config
00:05:27.760 test-crypto-perf: explicitly disabled via build config
00:05:27.760 test-dma-perf: explicitly disabled via build config
00:05:27.760 test-eventdev: explicitly disabled via build config
00:05:27.760 test-fib: explicitly disabled via build config
00:05:27.760 test-flow-perf: explicitly disabled via build config
00:05:27.760 test-gpudev: explicitly disabled via build config
00:05:27.760 test-mldev: explicitly disabled via build config
00:05:27.760 test-pipeline: explicitly disabled via build config
00:05:27.760 test-pmd: explicitly disabled via build config
00:05:27.760 test-regex: explicitly disabled via build config
00:05:27.760 test-sad: explicitly disabled via build config
00:05:27.760 test-security-perf: explicitly disabled via build config
00:05:27.760 
00:05:27.760 libs:
00:05:27.760 argparse: explicitly disabled via build config
00:05:27.760 metrics: explicitly disabled via build config
00:05:27.760 acl: explicitly disabled via build config
00:05:27.760 bbdev: explicitly disabled via build config
00:05:27.760 bitratestats: explicitly disabled via build config
00:05:27.760 bpf: explicitly disabled via build config
00:05:27.760 cfgfile: explicitly disabled via build config
00:05:27.760 distributor: explicitly disabled via build config
00:05:27.760 efd: explicitly disabled via build config
00:05:27.760 eventdev: explicitly disabled via build config
00:05:27.760 dispatcher: explicitly disabled via build config
00:05:27.760 gpudev: explicitly disabled via build config
00:05:27.760 gro: explicitly disabled via build config
00:05:27.761 gso: explicitly disabled via build config
00:05:27.761 ip_frag: explicitly disabled via build config
00:05:27.761 jobstats: explicitly disabled via build config
00:05:27.761 latencystats: explicitly disabled via build config
00:05:27.761 lpm: explicitly disabled via build config
00:05:27.761 member: explicitly disabled via build config
00:05:27.761 pcapng: explicitly disabled via build config
00:05:27.761 rawdev: explicitly disabled via build config
00:05:27.761 regexdev: explicitly disabled via build config
00:05:27.761 mldev: explicitly disabled via build config
00:05:27.761 rib: explicitly disabled via build config
00:05:27.761 sched: explicitly disabled via build config
00:05:27.761 stack: explicitly disabled via build config
00:05:27.761 ipsec: explicitly disabled via build config
00:05:27.761 pdcp: explicitly disabled via build config
00:05:27.761 fib: explicitly disabled via build config
00:05:27.761 port: explicitly disabled via build config
00:05:27.761 pdump: explicitly disabled via build config
00:05:27.761 table: explicitly disabled via build config
00:05:27.761 pipeline: explicitly disabled via build config
00:05:27.761 graph: explicitly disabled via build config
00:05:27.761 node: explicitly disabled via build config
00:05:27.761 
00:05:27.761 drivers:
00:05:27.761 common/cpt: not in enabled drivers build config
00:05:27.761 common/dpaax: not in enabled drivers build config
00:05:27.761 common/iavf: not in enabled drivers build config
00:05:27.761 common/idpf: not in enabled drivers build config
00:05:27.761 common/ionic: not in enabled drivers build config
00:05:27.761 common/mvep: not in enabled drivers build config
00:05:27.761 common/octeontx: not in enabled drivers build config
00:05:27.761 bus/auxiliary: not in enabled drivers build config
00:05:27.761 bus/cdx: not in enabled drivers build config
00:05:27.761 bus/dpaa: not in enabled drivers build config
00:05:27.761 bus/fslmc: not in enabled drivers build config
00:05:27.761 bus/ifpga: not in enabled drivers build config
00:05:27.761 bus/platform: not in enabled drivers build config
00:05:27.761 bus/uacce: not in enabled drivers build config
00:05:27.761 bus/vmbus: not in enabled drivers build config
00:05:27.761 common/cnxk: not in enabled drivers build config
00:05:27.761 common/mlx5: not in enabled drivers build config
00:05:27.761 common/nfp: not in enabled drivers build config
00:05:27.761 common/nitrox: not in enabled drivers build config
00:05:27.761 common/qat: not in enabled drivers build config
00:05:27.761 common/sfc_efx: not in enabled drivers build config
00:05:27.761 mempool/bucket: not in enabled drivers build config
00:05:27.761 mempool/cnxk: not in enabled drivers build config
00:05:27.761 mempool/dpaa: not in enabled drivers build config
00:05:27.761 mempool/dpaa2: not in enabled drivers build config
00:05:27.761 mempool/octeontx: not in enabled drivers build config
00:05:27.761 mempool/stack: not in enabled drivers build config
00:05:27.761 dma/cnxk: not in enabled drivers build config
00:05:27.761 dma/dpaa: not in enabled drivers build config
00:05:27.761 dma/dpaa2: not in enabled drivers build config
00:05:27.761 dma/hisilicon: not in enabled drivers build config
00:05:27.761 dma/idxd: not in enabled drivers build config
00:05:27.761 dma/ioat: not in enabled drivers build config
00:05:27.761 dma/skeleton: not in enabled drivers build config
00:05:27.761 net/af_packet: not in enabled drivers build config
00:05:27.761 net/af_xdp: not in enabled drivers build config
00:05:27.761 net/ark: not in enabled drivers build config
00:05:27.761 net/atlantic: not in enabled drivers build config
00:05:27.761 net/avp: not in enabled drivers build config
00:05:27.761 net/axgbe: not in enabled drivers build config
00:05:27.761 net/bnx2x: not in enabled drivers build config
00:05:27.761 net/bnxt: not in enabled drivers build config
00:05:27.761 net/bonding: not in enabled drivers build config
00:05:27.761 net/cnxk: not in enabled drivers build config
00:05:27.761 net/cpfl: not in enabled drivers build config
00:05:27.761 net/cxgbe: not in enabled drivers build config
00:05:27.761 net/dpaa: not in enabled drivers build config
00:05:27.761 net/dpaa2: not in enabled drivers build config
00:05:27.761 net/e1000: not in enabled drivers build config
00:05:27.761 net/ena: not in enabled drivers build config
00:05:27.761 net/enetc: not in enabled drivers build config
00:05:27.761 net/enetfec: not in enabled drivers build config
00:05:27.761 net/enic: not in enabled drivers build config
00:05:27.761 net/failsafe: not in enabled drivers build config
00:05:27.761 net/fm10k: not in enabled drivers build config
00:05:27.761 net/gve: not in enabled drivers build config
00:05:27.761 net/hinic: not in enabled drivers build config
00:05:27.761 net/hns3: not in enabled drivers build config
00:05:27.761 net/i40e: not in enabled drivers build config
00:05:27.761 net/iavf: not in enabled drivers build config
00:05:27.761 net/ice: not in enabled drivers build config
00:05:27.761 net/idpf: not in enabled drivers build config
00:05:27.761 net/igc: not in enabled drivers build config
00:05:27.761 net/ionic: not in enabled drivers build config
00:05:27.761 net/ipn3ke: not in enabled drivers build config
00:05:27.761 net/ixgbe: not in enabled drivers build config
00:05:27.761 net/mana: not in enabled drivers build config
00:05:27.761 net/memif: not in enabled drivers build config
00:05:27.761 net/mlx4: not in enabled drivers build config
00:05:27.761 net/mlx5: not in enabled drivers build config
00:05:27.761 net/mvneta: not in enabled drivers build config
00:05:27.761 net/mvpp2: not in enabled drivers build config
00:05:27.761 net/netvsc: not in enabled drivers build config
00:05:27.761 net/nfb: not in enabled drivers build config
00:05:27.761 net/nfp: not in enabled drivers build config
00:05:27.761 net/ngbe: not in enabled drivers build config
00:05:27.761 net/null: not in enabled drivers build config
00:05:27.761 net/octeontx: not in enabled drivers build config
00:05:27.761 net/octeon_ep: not in enabled drivers build config
00:05:27.761 net/pcap: not in enabled drivers build config
00:05:27.761 net/pfe: not in enabled drivers build config
00:05:27.761 net/qede: not in enabled drivers build config
00:05:27.761 net/ring: not in enabled drivers build config
00:05:27.761 net/sfc: not in enabled drivers build config
00:05:27.761 net/softnic: not in enabled drivers build config
00:05:27.761 net/tap: not in enabled drivers build config
00:05:27.761 net/thunderx: not in enabled drivers build config
00:05:27.761 net/txgbe: not in enabled drivers build config
00:05:27.761 net/vdev_netvsc: not in enabled drivers build config
00:05:27.761 net/vhost: not in enabled drivers build config
00:05:27.761 net/virtio: not in enabled drivers build config
00:05:27.761 net/vmxnet3: not in enabled drivers build config
00:05:27.761 raw/*: missing internal dependency, "rawdev"
00:05:27.761 crypto/armv8: not in enabled drivers build config
00:05:27.761 crypto/bcmfs: not in enabled drivers build config
00:05:27.761 crypto/caam_jr: not in enabled drivers build config
00:05:27.761 crypto/ccp: not in enabled drivers build config
00:05:27.761 crypto/cnxk: not in enabled drivers build config
00:05:27.761 crypto/dpaa_sec: not in enabled drivers build config
00:05:27.761 crypto/dpaa2_sec: not in enabled drivers build config
00:05:27.761 crypto/ipsec_mb: not in enabled drivers build config
00:05:27.761 crypto/mlx5: not in enabled drivers build config
00:05:27.761 crypto/mvsam: not in enabled drivers build config
00:05:27.761 crypto/nitrox: not in enabled drivers build config
00:05:27.761 crypto/null: not in enabled drivers build config
00:05:27.761 crypto/octeontx: not in enabled drivers build config
00:05:27.761 crypto/openssl: not in enabled drivers build config
00:05:27.761 crypto/scheduler: not in enabled drivers build config
00:05:27.761 crypto/uadk: not in enabled drivers build config
00:05:27.761 crypto/virtio: not in enabled drivers build config
00:05:27.761 compress/isal: not in enabled drivers build config
00:05:27.761 compress/mlx5: not in enabled drivers build config
00:05:27.761 compress/nitrox: not in enabled drivers build config
00:05:27.761 compress/octeontx: not in enabled drivers build config
00:05:27.761 compress/zlib: not in enabled drivers build config
00:05:27.761 regex/*: missing internal dependency, "regexdev"
00:05:27.761 ml/*: missing internal dependency, "mldev"
00:05:27.761 vdpa/ifc: not in enabled drivers build config
00:05:27.761 vdpa/mlx5: not in enabled drivers build config
00:05:27.761 vdpa/nfp: not in enabled drivers build config
00:05:27.761 vdpa/sfc: not in enabled drivers build config
00:05:27.761 event/*: missing internal dependency, "eventdev"
00:05:27.761 baseband/*: missing internal dependency, "bbdev"
00:05:27.761 gpu/*: missing internal dependency, "gpudev"
00:05:27.761 
00:05:27.761 
00:05:27.761 Build targets in project: 85
00:05:27.761 
00:05:27.761 DPDK 24.03.0
00:05:27.761 
00:05:27.761 User defined options
00:05:27.761 buildtype : debug
00:05:27.761 default_library : shared
00:05:27.761 libdir : lib
00:05:27.761 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:27.761 b_sanitize : address
00:05:27.761 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:27.761 c_link_args :
00:05:27.761 cpu_instruction_set: native
00:05:27.761 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:05:27.761 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:05:27.761 enable_docs : false
00:05:27.761 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:05:27.761 enable_kmods : false
00:05:27.761 max_lcores : 128
00:05:27.761 tests : false
00:05:27.761 
00:05:27.761 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:28.020 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:05:28.278 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:28.278 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:28.278 [3/268] Linking static target lib/librte_kvargs.a
00:05:28.278 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:28.278 [5/268] Linking static target lib/librte_log.a
00:05:28.278 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:28.843 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:28.843 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:28.843 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:28.843 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:28.843 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:28.843 [12/268] Linking static target lib/librte_telemetry.a
00:05:28.843 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:28.843 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:28.843 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:28.843 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:29.101 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:29.101 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:29.669 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:29.669 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:29.669 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:29.669 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:29.669 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:29.669 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:29.669 [25/268] Linking target lib/librte_log.so.24.1
00:05:29.669 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:29.669 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:29.928 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:29.928 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:29.928 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:29.928 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:30.186 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:30.186 [33/268] Linking target lib/librte_kvargs.so.24.1
00:05:30.186 [34/268] Linking target lib/librte_telemetry.so.24.1
00:05:30.186 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:30.444 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:30.444 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:30.444 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:30.444 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:30.444 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:30.444 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:30.444 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:30.444 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:30.444 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:30.702 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:30.702 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:30.960 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:30.960
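For reference, the "User defined options" block above maps onto a meson invocation along the lines of the sketch below. This is a reconstruction for readability, not the exact command the job ran (SPDK's build scripts drive this step); every option name and value is taken from the summary above, with the comma-separated lists passed through unchanged.

    # Hypothetical equivalent of the DPDK configuration summarized above.
    meson setup build-tmp \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Dlibdir=lib \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
      -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
      -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    # The backend then runs, as logged below:
    ninja -C build-tmp -j 10

With buildtype=debug and b_sanitize=address, the [n/268] steps that follow produce debug, AddressSanitizer-instrumented objects.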
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:30.960 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:31.218 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:31.218 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:31.218 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:31.477 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:31.477 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:31.477 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:31.477 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:31.477 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:31.477 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:31.735 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:31.735 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:31.735 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:31.735 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:31.735 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:31.993 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:31.993 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:31.993 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:31.993 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:32.251 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:32.251 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:32.251 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:32.510 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:32.510 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:32.510 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:32.510 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:32.510 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:32.510 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:32.769 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:32.769 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:32.769 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:32.769 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:32.769 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:33.028 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:33.028 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:33.028 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:33.287 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:33.287 [86/268] Linking static target lib/librte_eal.a 00:05:33.287 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:33.545 [88/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:33.545 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:33.545 [90/268] Linking static target lib/librte_rcu.a 00:05:33.545 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:33.545 [92/268] Linking static target lib/librte_ring.a 00:05:33.545 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:33.545 [94/268] Linking static target lib/librte_mempool.a 00:05:33.545 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:33.545 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:33.803 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:33.803 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:33.803 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:34.061 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.061 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.061 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:34.061 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:34.061 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:34.319 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:34.319 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:34.319 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:34.319 [108/268] Linking static target lib/librte_mbuf.a 00:05:34.319 [109/268] Linking static target lib/librte_net.a 00:05:34.577 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:34.577 [111/268] Linking static target lib/librte_meter.a 00:05:34.577 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:34.835 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:34.835 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.835 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:34.835 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.093 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:35.093 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.093 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:35.659 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.659 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:35.659 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:35.659 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:35.916 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:35.916 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:35.916 [126/268] Linking static target lib/librte_pci.a 00:05:36.173 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:36.174 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:36.174 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:36.174 [130/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:36.174 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:36.174 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:36.174 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:36.431 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:36.431 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:36.431 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:36.431 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:36.431 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:36.431 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.431 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:36.431 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:36.688 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:36.688 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:36.688 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:36.688 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:36.688 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:36.688 [147/268] Linking static target lib/librte_cmdline.a 00:05:36.954 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:37.222 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:37.222 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:37.222 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:37.516 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:37.516 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:37.516 [154/268] Linking static target lib/librte_ethdev.a 00:05:37.516 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:37.516 [156/268] Linking static target lib/librte_timer.a 00:05:37.773 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:37.773 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:38.030 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:38.030 [160/268] Linking static target lib/librte_compressdev.a 00:05:38.030 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:38.030 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:38.030 [163/268] Linking static target lib/librte_hash.a 00:05:38.288 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:38.288 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.288 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:38.288 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:38.288 [168/268] Linking static target lib/librte_dmadev.a 00:05:38.545 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:38.804 [170/268] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.804 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:38.804 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:38.804 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:39.062 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:39.320 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.320 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:39.320 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:39.320 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:39.320 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:39.578 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.578 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:39.578 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.578 [183/268] Linking static target lib/librte_cryptodev.a 00:05:39.578 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:39.835 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:39.835 [186/268] Linking static target lib/librte_power.a 00:05:40.093 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:40.093 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:40.093 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:40.093 [190/268] Linking static target lib/librte_reorder.a 00:05:40.352 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:40.610 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:40.610 [193/268] Linking static target lib/librte_security.a 00:05:40.610 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:40.868 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.125 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:41.383 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.383 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:41.383 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:41.383 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.642 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:41.642 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:41.901 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:41.901 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:41.901 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:41.901 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:42.184 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:42.184 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:42.442 [209/268] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:42.442 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:42.442 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:42.442 [212/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:42.442 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:42.442 [214/268] Linking static target drivers/librte_bus_vdev.a 00:05:42.700 [215/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.700 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:42.700 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:42.700 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:42.700 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:42.700 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:42.700 [221/268] Linking static target drivers/librte_bus_pci.a 00:05:42.957 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:42.957 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:42.957 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:42.957 [225/268] Linking static target drivers/librte_mempool_ring.a 00:05:42.957 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.215 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.148 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:47.429 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.687 [230/268] Linking target lib/librte_eal.so.24.1 00:05:47.687 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:47.687 [232/268] Linking target lib/librte_ring.so.24.1 00:05:47.687 [233/268] Linking target lib/librte_pci.so.24.1 00:05:47.687 [234/268] Linking target lib/librte_meter.so.24.1 00:05:47.687 [235/268] Linking target lib/librte_timer.so.24.1 00:05:47.945 [236/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:47.945 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:47.945 [238/268] Linking target lib/librte_dmadev.so.24.1 00:05:47.945 [239/268] Linking static target lib/librte_vhost.a 00:05:47.945 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:47.945 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:47.945 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:47.945 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:47.945 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:47.945 [245/268] Linking target lib/librte_rcu.so.24.1 00:05:47.945 [246/268] Linking target lib/librte_mempool.so.24.1 00:05:47.945 [247/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:48.204 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 
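Two kinds of bookkeeping targets are interleaved with the compiles here. The "Generating symbol file ....symbols" steps are meson's relink optimization: it records the exported symbols of each freshly linked librte_*.so so that dependents are only relinked when the export list actually changes. The "...sym_chk" targets appear to be DPDK's own symbol check, comparing what each library exports against its version map. A manual spot-check in the same spirit (hypothetical; assumes the lib/ output layout inside build-tmp and that nm is available on the build host):

    # Dump the dynamic symbols one of the just-linked libraries exports.
    cd /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
    nm --dynamic --defined-only lib/librte_eal.so.24.1 | head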
00:05:48.204 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:48.204 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:48.204 [251/268] Linking target lib/librte_mbuf.so.24.1 00:05:48.204 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:48.204 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:48.462 [254/268] Linking target lib/librte_net.so.24.1 00:05:48.462 [255/268] Linking target lib/librte_reorder.so.24.1 00:05:48.462 [256/268] Linking target lib/librte_compressdev.so.24.1 00:05:48.462 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:05:48.462 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:48.462 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:48.462 [260/268] Linking target lib/librte_hash.so.24.1 00:05:48.462 [261/268] Linking target lib/librte_security.so.24.1 00:05:48.720 [262/268] Linking target lib/librte_cmdline.so.24.1 00:05:48.720 [263/268] Linking target lib/librte_ethdev.so.24.1 00:05:48.720 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:48.720 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:48.979 [266/268] Linking target lib/librte_power.so.24.1 00:05:50.356 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.356 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:50.356 INFO: autodetecting backend as ninja 00:05:50.356 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:16.933 CC lib/ut/ut.o 00:06:16.933 CC lib/ut_mock/mock.o 00:06:16.933 CC lib/log/log.o 00:06:16.933 CC lib/log/log_flags.o 00:06:16.933 CC lib/log/log_deprecated.o 00:06:16.933 LIB libspdk_ut_mock.a 00:06:16.933 LIB libspdk_ut.a 00:06:16.933 SO libspdk_ut_mock.so.6.0 00:06:16.933 SO libspdk_ut.so.2.0 00:06:16.933 LIB libspdk_log.a 00:06:16.933 SO libspdk_log.so.7.1 00:06:16.933 SYMLINK libspdk_ut_mock.so 00:06:16.933 SYMLINK libspdk_ut.so 00:06:16.933 SYMLINK libspdk_log.so 00:06:16.933 CC lib/ioat/ioat.o 00:06:16.933 CC lib/dma/dma.o 00:06:16.933 CC lib/util/base64.o 00:06:16.933 CC lib/util/crc16.o 00:06:16.933 CC lib/util/cpuset.o 00:06:16.933 CC lib/util/crc32c.o 00:06:16.933 CC lib/util/crc32.o 00:06:16.933 CC lib/util/bit_array.o 00:06:16.933 CXX lib/trace_parser/trace.o 00:06:16.933 CC lib/vfio_user/host/vfio_user_pci.o 00:06:16.933 CC lib/util/crc32_ieee.o 00:06:16.933 CC lib/util/crc64.o 00:06:16.933 CC lib/util/dif.o 00:06:16.933 CC lib/util/fd.o 00:06:16.933 CC lib/util/fd_group.o 00:06:16.933 LIB libspdk_dma.a 00:06:16.933 CC lib/util/file.o 00:06:16.933 CC lib/vfio_user/host/vfio_user.o 00:06:16.933 SO libspdk_dma.so.5.0 00:06:16.933 LIB libspdk_ioat.a 00:06:16.933 CC lib/util/hexlify.o 00:06:16.933 SO libspdk_ioat.so.7.0 00:06:16.933 SYMLINK libspdk_dma.so 00:06:16.933 CC lib/util/iov.o 00:06:16.933 CC lib/util/math.o 00:06:16.933 CC lib/util/net.o 00:06:16.933 SYMLINK libspdk_ioat.so 00:06:16.933 CC lib/util/pipe.o 00:06:16.933 CC lib/util/strerror_tls.o 00:06:16.933 CC lib/util/string.o 00:06:16.933 LIB libspdk_vfio_user.a 00:06:16.933 CC lib/util/uuid.o 00:06:16.933 CC lib/util/xor.o 00:06:16.933 SO libspdk_vfio_user.so.5.0 00:06:16.933 CC lib/util/zipf.o 00:06:16.933 CC lib/util/md5.o 00:06:16.933 SYMLINK 
libspdk_vfio_user.so 00:06:16.933 LIB libspdk_util.a 00:06:16.933 SO libspdk_util.so.10.1 00:06:16.933 LIB libspdk_trace_parser.a 00:06:16.933 SO libspdk_trace_parser.so.6.0 00:06:16.933 SYMLINK libspdk_util.so 00:06:16.933 SYMLINK libspdk_trace_parser.so 00:06:17.191 CC lib/conf/conf.o 00:06:17.191 CC lib/vmd/vmd.o 00:06:17.192 CC lib/vmd/led.o 00:06:17.192 CC lib/idxd/idxd.o 00:06:17.192 CC lib/idxd/idxd_kernel.o 00:06:17.192 CC lib/env_dpdk/env.o 00:06:17.192 CC lib/env_dpdk/memory.o 00:06:17.192 CC lib/json/json_parse.o 00:06:17.192 CC lib/idxd/idxd_user.o 00:06:17.192 CC lib/rdma_utils/rdma_utils.o 00:06:17.192 CC lib/env_dpdk/pci.o 00:06:17.192 CC lib/env_dpdk/init.o 00:06:17.192 LIB libspdk_conf.a 00:06:17.450 CC lib/json/json_util.o 00:06:17.450 SO libspdk_conf.so.6.0 00:06:17.450 CC lib/env_dpdk/threads.o 00:06:17.450 LIB libspdk_rdma_utils.a 00:06:17.450 SYMLINK libspdk_conf.so 00:06:17.450 CC lib/json/json_write.o 00:06:17.450 SO libspdk_rdma_utils.so.1.0 00:06:17.450 SYMLINK libspdk_rdma_utils.so 00:06:17.707 CC lib/env_dpdk/pci_ioat.o 00:06:17.707 CC lib/env_dpdk/pci_virtio.o 00:06:17.707 CC lib/env_dpdk/pci_vmd.o 00:06:17.707 CC lib/rdma_provider/common.o 00:06:17.707 CC lib/env_dpdk/pci_idxd.o 00:06:17.708 LIB libspdk_json.a 00:06:17.708 CC lib/env_dpdk/pci_event.o 00:06:17.708 SO libspdk_json.so.6.0 00:06:17.965 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:17.965 LIB libspdk_idxd.a 00:06:17.965 SYMLINK libspdk_json.so 00:06:17.965 CC lib/env_dpdk/sigbus_handler.o 00:06:17.965 CC lib/env_dpdk/pci_dpdk.o 00:06:17.965 SO libspdk_idxd.so.12.1 00:06:17.965 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:17.965 LIB libspdk_vmd.a 00:06:17.965 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:17.965 SO libspdk_vmd.so.6.0 00:06:17.965 SYMLINK libspdk_idxd.so 00:06:17.965 SYMLINK libspdk_vmd.so 00:06:17.965 LIB libspdk_rdma_provider.a 00:06:18.223 SO libspdk_rdma_provider.so.7.0 00:06:18.223 SYMLINK libspdk_rdma_provider.so 00:06:18.223 CC lib/jsonrpc/jsonrpc_server.o 00:06:18.223 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:18.223 CC lib/jsonrpc/jsonrpc_client.o 00:06:18.223 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:18.481 LIB libspdk_jsonrpc.a 00:06:18.739 SO libspdk_jsonrpc.so.6.0 00:06:18.739 SYMLINK libspdk_jsonrpc.so 00:06:18.997 LIB libspdk_env_dpdk.a 00:06:18.997 SO libspdk_env_dpdk.so.15.1 00:06:19.254 CC lib/rpc/rpc.o 00:06:19.255 SYMLINK libspdk_env_dpdk.so 00:06:19.512 LIB libspdk_rpc.a 00:06:19.512 SO libspdk_rpc.so.6.0 00:06:19.512 SYMLINK libspdk_rpc.so 00:06:20.081 CC lib/notify/notify_rpc.o 00:06:20.081 CC lib/notify/notify.o 00:06:20.081 CC lib/keyring/keyring_rpc.o 00:06:20.081 CC lib/keyring/keyring.o 00:06:20.081 CC lib/trace/trace_rpc.o 00:06:20.081 CC lib/trace/trace_flags.o 00:06:20.081 CC lib/trace/trace.o 00:06:20.081 LIB libspdk_notify.a 00:06:20.081 SO libspdk_notify.so.6.0 00:06:20.339 LIB libspdk_keyring.a 00:06:20.339 SYMLINK libspdk_notify.so 00:06:20.339 LIB libspdk_trace.a 00:06:20.339 SO libspdk_keyring.so.2.0 00:06:20.339 SO libspdk_trace.so.11.0 00:06:20.339 SYMLINK libspdk_keyring.so 00:06:20.339 SYMLINK libspdk_trace.so 00:06:20.905 CC lib/sock/sock.o 00:06:20.905 CC lib/sock/sock_rpc.o 00:06:20.905 CC lib/thread/thread.o 00:06:20.905 CC lib/thread/iobuf.o 00:06:21.477 LIB libspdk_sock.a 00:06:21.477 SO libspdk_sock.so.10.0 00:06:21.477 SYMLINK libspdk_sock.so 00:06:22.043 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:22.043 CC lib/nvme/nvme_fabric.o 00:06:22.043 CC lib/nvme/nvme_ctrlr.o 00:06:22.043 CC lib/nvme/nvme_ns_cmd.o 00:06:22.043 CC lib/nvme/nvme_ns.o 00:06:22.043 CC 
lib/nvme/nvme_pcie.o 00:06:22.043 CC lib/nvme/nvme_pcie_common.o 00:06:22.043 CC lib/nvme/nvme.o 00:06:22.043 CC lib/nvme/nvme_qpair.o 00:06:22.611 CC lib/nvme/nvme_quirks.o 00:06:22.611 CC lib/nvme/nvme_transport.o 00:06:22.869 CC lib/nvme/nvme_discovery.o 00:06:22.869 LIB libspdk_thread.a 00:06:22.869 SO libspdk_thread.so.11.0 00:06:22.869 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:22.869 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:23.128 SYMLINK libspdk_thread.so 00:06:23.128 CC lib/nvme/nvme_tcp.o 00:06:23.128 CC lib/nvme/nvme_opal.o 00:06:23.128 CC lib/nvme/nvme_io_msg.o 00:06:23.387 CC lib/nvme/nvme_poll_group.o 00:06:23.387 CC lib/nvme/nvme_zns.o 00:06:23.645 CC lib/nvme/nvme_stubs.o 00:06:23.645 CC lib/nvme/nvme_auth.o 00:06:23.903 CC lib/nvme/nvme_cuse.o 00:06:23.903 CC lib/accel/accel.o 00:06:23.903 CC lib/nvme/nvme_rdma.o 00:06:23.903 CC lib/blob/blobstore.o 00:06:23.903 CC lib/accel/accel_rpc.o 00:06:24.220 CC lib/blob/request.o 00:06:24.221 CC lib/accel/accel_sw.o 00:06:24.221 CC lib/init/json_config.o 00:06:24.479 CC lib/init/subsystem.o 00:06:24.736 CC lib/init/subsystem_rpc.o 00:06:24.736 CC lib/init/rpc.o 00:06:24.736 CC lib/blob/zeroes.o 00:06:24.736 CC lib/blob/blob_bs_dev.o 00:06:24.993 LIB libspdk_init.a 00:06:24.993 SO libspdk_init.so.6.0 00:06:24.993 SYMLINK libspdk_init.so 00:06:24.993 CC lib/virtio/virtio_vhost_user.o 00:06:24.993 CC lib/virtio/virtio.o 00:06:24.993 CC lib/virtio/virtio_vfio_user.o 00:06:24.993 CC lib/virtio/virtio_pci.o 00:06:25.252 CC lib/fsdev/fsdev.o 00:06:25.252 CC lib/fsdev/fsdev_io.o 00:06:25.252 LIB libspdk_accel.a 00:06:25.252 CC lib/event/app.o 00:06:25.252 SO libspdk_accel.so.16.0 00:06:25.252 CC lib/fsdev/fsdev_rpc.o 00:06:25.509 CC lib/event/reactor.o 00:06:25.509 SYMLINK libspdk_accel.so 00:06:25.509 CC lib/event/log_rpc.o 00:06:25.509 CC lib/event/app_rpc.o 00:06:25.509 LIB libspdk_virtio.a 00:06:25.509 SO libspdk_virtio.so.7.0 00:06:25.509 CC lib/event/scheduler_static.o 00:06:25.768 LIB libspdk_nvme.a 00:06:25.768 SYMLINK libspdk_virtio.so 00:06:25.768 CC lib/bdev/bdev.o 00:06:25.768 CC lib/bdev/bdev_rpc.o 00:06:25.768 CC lib/bdev/bdev_zone.o 00:06:25.768 CC lib/bdev/part.o 00:06:25.768 CC lib/bdev/scsi_nvme.o 00:06:26.025 SO libspdk_nvme.so.15.0 00:06:26.025 LIB libspdk_event.a 00:06:26.025 SO libspdk_event.so.14.0 00:06:26.025 LIB libspdk_fsdev.a 00:06:26.025 SYMLINK libspdk_event.so 00:06:26.284 SO libspdk_fsdev.so.2.0 00:06:26.284 SYMLINK libspdk_fsdev.so 00:06:26.284 SYMLINK libspdk_nvme.so 00:06:26.850 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:27.536 LIB libspdk_fuse_dispatcher.a 00:06:27.536 SO libspdk_fuse_dispatcher.so.1.0 00:06:27.802 SYMLINK libspdk_fuse_dispatcher.so 00:06:28.737 LIB libspdk_blob.a 00:06:28.737 SO libspdk_blob.so.12.0 00:06:28.737 SYMLINK libspdk_blob.so 00:06:29.303 CC lib/lvol/lvol.o 00:06:29.303 CC lib/blobfs/tree.o 00:06:29.303 CC lib/blobfs/blobfs.o 00:06:29.303 LIB libspdk_bdev.a 00:06:29.303 SO libspdk_bdev.so.17.0 00:06:29.561 SYMLINK libspdk_bdev.so 00:06:29.821 CC lib/scsi/dev.o 00:06:29.821 CC lib/scsi/port.o 00:06:29.821 CC lib/scsi/scsi.o 00:06:29.821 CC lib/scsi/lun.o 00:06:29.821 CC lib/ftl/ftl_core.o 00:06:29.821 CC lib/nbd/nbd.o 00:06:29.821 CC lib/ublk/ublk.o 00:06:29.821 CC lib/nvmf/ctrlr.o 00:06:30.080 CC lib/ublk/ublk_rpc.o 00:06:30.080 CC lib/scsi/scsi_bdev.o 00:06:30.080 CC lib/scsi/scsi_pr.o 00:06:30.080 CC lib/scsi/scsi_rpc.o 00:06:30.080 CC lib/scsi/task.o 00:06:30.080 LIB libspdk_blobfs.a 00:06:30.339 SO libspdk_blobfs.so.11.0 00:06:30.339 CC lib/ftl/ftl_init.o 00:06:30.339 CC 
lib/nbd/nbd_rpc.o 00:06:30.339 LIB libspdk_lvol.a 00:06:30.339 SYMLINK libspdk_blobfs.so 00:06:30.339 CC lib/nvmf/ctrlr_discovery.o 00:06:30.339 CC lib/nvmf/ctrlr_bdev.o 00:06:30.339 SO libspdk_lvol.so.11.0 00:06:30.339 CC lib/nvmf/subsystem.o 00:06:30.339 SYMLINK libspdk_lvol.so 00:06:30.339 CC lib/nvmf/nvmf.o 00:06:30.339 CC lib/nvmf/nvmf_rpc.o 00:06:30.598 LIB libspdk_nbd.a 00:06:30.598 SO libspdk_nbd.so.7.0 00:06:30.598 CC lib/ftl/ftl_layout.o 00:06:30.598 LIB libspdk_ublk.a 00:06:30.598 LIB libspdk_scsi.a 00:06:30.598 SYMLINK libspdk_nbd.so 00:06:30.598 SO libspdk_ublk.so.3.0 00:06:30.598 CC lib/ftl/ftl_debug.o 00:06:30.598 SO libspdk_scsi.so.9.0 00:06:30.598 SYMLINK libspdk_ublk.so 00:06:30.598 CC lib/ftl/ftl_io.o 00:06:30.856 SYMLINK libspdk_scsi.so 00:06:30.856 CC lib/ftl/ftl_sb.o 00:06:30.856 CC lib/ftl/ftl_l2p.o 00:06:30.856 CC lib/ftl/ftl_l2p_flat.o 00:06:30.856 CC lib/ftl/ftl_nv_cache.o 00:06:30.856 CC lib/nvmf/transport.o 00:06:31.115 CC lib/ftl/ftl_band.o 00:06:31.115 CC lib/nvmf/tcp.o 00:06:31.115 CC lib/nvmf/stubs.o 00:06:31.115 CC lib/iscsi/conn.o 00:06:31.373 CC lib/ftl/ftl_band_ops.o 00:06:31.632 CC lib/ftl/ftl_writer.o 00:06:31.632 CC lib/vhost/vhost.o 00:06:31.632 CC lib/vhost/vhost_rpc.o 00:06:31.632 CC lib/vhost/vhost_scsi.o 00:06:31.891 CC lib/vhost/vhost_blk.o 00:06:31.891 CC lib/vhost/rte_vhost_user.o 00:06:31.891 CC lib/iscsi/init_grp.o 00:06:31.891 CC lib/ftl/ftl_rq.o 00:06:31.891 CC lib/ftl/ftl_reloc.o 00:06:32.149 CC lib/ftl/ftl_l2p_cache.o 00:06:32.149 CC lib/iscsi/iscsi.o 00:06:32.149 CC lib/iscsi/param.o 00:06:32.408 CC lib/iscsi/portal_grp.o 00:06:32.408 CC lib/nvmf/mdns_server.o 00:06:32.408 CC lib/iscsi/tgt_node.o 00:06:32.667 CC lib/iscsi/iscsi_subsystem.o 00:06:32.667 CC lib/ftl/ftl_p2l.o 00:06:32.667 CC lib/ftl/ftl_p2l_log.o 00:06:32.667 CC lib/ftl/mngt/ftl_mngt.o 00:06:32.926 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:32.926 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:32.926 CC lib/nvmf/rdma.o 00:06:32.926 CC lib/iscsi/iscsi_rpc.o 00:06:33.185 CC lib/iscsi/task.o 00:06:33.185 LIB libspdk_vhost.a 00:06:33.185 CC lib/nvmf/auth.o 00:06:33.185 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:33.185 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:33.185 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:33.185 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:33.185 SO libspdk_vhost.so.8.0 00:06:33.185 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:33.444 SYMLINK libspdk_vhost.so 00:06:33.444 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:33.445 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:33.445 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:33.445 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:33.445 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:33.445 CC lib/ftl/utils/ftl_conf.o 00:06:33.445 CC lib/ftl/utils/ftl_md.o 00:06:33.445 CC lib/ftl/utils/ftl_mempool.o 00:06:33.703 CC lib/ftl/utils/ftl_bitmap.o 00:06:33.703 CC lib/ftl/utils/ftl_property.o 00:06:33.703 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:33.703 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:33.703 LIB libspdk_iscsi.a 00:06:33.703 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:33.963 SO libspdk_iscsi.so.8.0 00:06:33.963 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:33.963 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:33.963 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:33.963 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:33.963 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:33.963 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:33.963 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:33.963 SYMLINK libspdk_iscsi.so 00:06:33.963 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:33.963 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:33.963 
CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:34.222 CC lib/ftl/base/ftl_base_dev.o 00:06:34.222 CC lib/ftl/base/ftl_base_bdev.o 00:06:34.222 CC lib/ftl/ftl_trace.o 00:06:34.481 LIB libspdk_ftl.a 00:06:34.741 SO libspdk_ftl.so.9.0 00:06:35.310 SYMLINK libspdk_ftl.so 00:06:35.878 LIB libspdk_nvmf.a 00:06:35.878 SO libspdk_nvmf.so.20.0 00:06:36.185 SYMLINK libspdk_nvmf.so 00:06:36.755 CC module/env_dpdk/env_dpdk_rpc.o 00:06:36.755 CC module/accel/iaa/accel_iaa.o 00:06:36.755 CC module/accel/dsa/accel_dsa.o 00:06:36.755 CC module/accel/error/accel_error.o 00:06:36.755 CC module/keyring/file/keyring.o 00:06:36.755 CC module/blob/bdev/blob_bdev.o 00:06:36.755 CC module/accel/ioat/accel_ioat.o 00:06:36.755 CC module/sock/posix/posix.o 00:06:36.755 CC module/fsdev/aio/fsdev_aio.o 00:06:36.755 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:36.755 LIB libspdk_env_dpdk_rpc.a 00:06:36.755 SO libspdk_env_dpdk_rpc.so.6.0 00:06:37.014 SYMLINK libspdk_env_dpdk_rpc.so 00:06:37.014 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:37.014 CC module/keyring/file/keyring_rpc.o 00:06:37.014 CC module/accel/error/accel_error_rpc.o 00:06:37.014 CC module/accel/iaa/accel_iaa_rpc.o 00:06:37.014 CC module/accel/ioat/accel_ioat_rpc.o 00:06:37.014 LIB libspdk_scheduler_dynamic.a 00:06:37.014 SO libspdk_scheduler_dynamic.so.4.0 00:06:37.014 CC module/fsdev/aio/linux_aio_mgr.o 00:06:37.014 LIB libspdk_blob_bdev.a 00:06:37.014 CC module/accel/dsa/accel_dsa_rpc.o 00:06:37.015 LIB libspdk_keyring_file.a 00:06:37.015 SO libspdk_blob_bdev.so.12.0 00:06:37.015 SYMLINK libspdk_scheduler_dynamic.so 00:06:37.015 LIB libspdk_accel_error.a 00:06:37.015 SO libspdk_keyring_file.so.2.0 00:06:37.015 LIB libspdk_accel_ioat.a 00:06:37.273 LIB libspdk_accel_iaa.a 00:06:37.273 SO libspdk_accel_error.so.2.0 00:06:37.273 SO libspdk_accel_ioat.so.6.0 00:06:37.273 SYMLINK libspdk_blob_bdev.so 00:06:37.273 SO libspdk_accel_iaa.so.3.0 00:06:37.273 SYMLINK libspdk_keyring_file.so 00:06:37.273 SYMLINK libspdk_accel_ioat.so 00:06:37.273 SYMLINK libspdk_accel_error.so 00:06:37.273 LIB libspdk_accel_dsa.a 00:06:37.273 SYMLINK libspdk_accel_iaa.so 00:06:37.273 SO libspdk_accel_dsa.so.5.0 00:06:37.273 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:37.273 SYMLINK libspdk_accel_dsa.so 00:06:37.533 CC module/keyring/linux/keyring.o 00:06:37.533 CC module/scheduler/gscheduler/gscheduler.o 00:06:37.533 LIB libspdk_scheduler_dpdk_governor.a 00:06:37.533 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:37.533 CC module/bdev/error/vbdev_error.o 00:06:37.533 CC module/bdev/gpt/gpt.o 00:06:37.533 CC module/bdev/delay/vbdev_delay.o 00:06:37.533 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:37.533 CC module/bdev/gpt/vbdev_gpt.o 00:06:37.533 CC module/blobfs/bdev/blobfs_bdev.o 00:06:37.533 CC module/bdev/lvol/vbdev_lvol.o 00:06:37.533 CC module/keyring/linux/keyring_rpc.o 00:06:37.533 LIB libspdk_scheduler_gscheduler.a 00:06:37.533 LIB libspdk_sock_posix.a 00:06:37.533 SO libspdk_scheduler_gscheduler.so.4.0 00:06:37.533 LIB libspdk_fsdev_aio.a 00:06:37.792 SO libspdk_sock_posix.so.6.0 00:06:37.792 SO libspdk_fsdev_aio.so.1.0 00:06:37.792 SYMLINK libspdk_scheduler_gscheduler.so 00:06:37.792 LIB libspdk_keyring_linux.a 00:06:37.792 CC module/bdev/error/vbdev_error_rpc.o 00:06:37.792 SYMLINK libspdk_sock_posix.so 00:06:37.792 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:37.792 SO libspdk_keyring_linux.so.1.0 00:06:37.792 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:37.792 SYMLINK libspdk_fsdev_aio.so 00:06:37.792 CC module/bdev/delay/vbdev_delay_rpc.o 
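From here on the log is SPDK's quiet-build output rather than ninja's: CC is a C compile, LIB the creation of a static archive, SO the link of a versioned shared object, and SYMLINK the unversioned alias pointing at it. For a pair like "SO libspdk_util.so.10.1" followed by "SYMLINK libspdk_util.so" above, the resulting layout is roughly the following (abbreviated, hypothetical listing; the build/lib output directory is an assumption, not shown in the log):

    $ ls -l build/lib/libspdk_util.so*
    libspdk_util.so -> libspdk_util.so.10.1
    libspdk_util.so.10.1

Linking against the unversioned name while shipping the versioned object is the usual ELF shared-library convention; the number is meant to move only when the library's ABI changes.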
00:06:37.792 SYMLINK libspdk_keyring_linux.so 00:06:37.792 LIB libspdk_bdev_gpt.a 00:06:37.792 LIB libspdk_bdev_error.a 00:06:38.051 SO libspdk_bdev_error.so.6.0 00:06:38.051 SO libspdk_bdev_gpt.so.6.0 00:06:38.051 CC module/bdev/malloc/bdev_malloc.o 00:06:38.051 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:38.051 LIB libspdk_blobfs_bdev.a 00:06:38.051 LIB libspdk_bdev_delay.a 00:06:38.051 CC module/bdev/null/bdev_null.o 00:06:38.051 SO libspdk_blobfs_bdev.so.6.0 00:06:38.051 SYMLINK libspdk_bdev_gpt.so 00:06:38.051 SO libspdk_bdev_delay.so.6.0 00:06:38.051 SYMLINK libspdk_bdev_error.so 00:06:38.051 CC module/bdev/null/bdev_null_rpc.o 00:06:38.051 CC module/bdev/nvme/bdev_nvme.o 00:06:38.051 SYMLINK libspdk_blobfs_bdev.so 00:06:38.051 SYMLINK libspdk_bdev_delay.so 00:06:38.051 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:38.051 CC module/bdev/nvme/nvme_rpc.o 00:06:38.051 CC module/bdev/nvme/bdev_mdns_client.o 00:06:38.311 CC module/bdev/nvme/vbdev_opal.o 00:06:38.311 LIB libspdk_bdev_lvol.a 00:06:38.311 CC module/bdev/passthru/vbdev_passthru.o 00:06:38.311 SO libspdk_bdev_lvol.so.6.0 00:06:38.311 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:38.311 LIB libspdk_bdev_null.a 00:06:38.311 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:38.311 SYMLINK libspdk_bdev_lvol.so 00:06:38.311 SO libspdk_bdev_null.so.6.0 00:06:38.311 LIB libspdk_bdev_malloc.a 00:06:38.311 SYMLINK libspdk_bdev_null.so 00:06:38.311 SO libspdk_bdev_malloc.so.6.0 00:06:38.569 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:38.569 SYMLINK libspdk_bdev_malloc.so 00:06:38.569 CC module/bdev/raid/bdev_raid.o 00:06:38.569 CC module/bdev/split/vbdev_split.o 00:06:38.569 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:38.569 LIB libspdk_bdev_passthru.a 00:06:38.569 CC module/bdev/aio/bdev_aio.o 00:06:38.569 CC module/bdev/xnvme/bdev_xnvme.o 00:06:38.828 CC module/bdev/ftl/bdev_ftl.o 00:06:38.828 SO libspdk_bdev_passthru.so.6.0 00:06:38.828 CC module/bdev/iscsi/bdev_iscsi.o 00:06:38.828 SYMLINK libspdk_bdev_passthru.so 00:06:38.828 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:38.828 CC module/bdev/split/vbdev_split_rpc.o 00:06:39.087 CC module/bdev/aio/bdev_aio_rpc.o 00:06:39.087 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:06:39.087 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:39.087 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:39.087 LIB libspdk_bdev_split.a 00:06:39.087 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:39.087 CC module/bdev/raid/bdev_raid_rpc.o 00:06:39.087 SO libspdk_bdev_split.so.6.0 00:06:39.087 SYMLINK libspdk_bdev_split.so 00:06:39.087 CC module/bdev/raid/bdev_raid_sb.o 00:06:39.087 LIB libspdk_bdev_xnvme.a 00:06:39.087 LIB libspdk_bdev_aio.a 00:06:39.087 LIB libspdk_bdev_iscsi.a 00:06:39.087 SO libspdk_bdev_xnvme.so.3.0 00:06:39.346 SO libspdk_bdev_iscsi.so.6.0 00:06:39.346 SO libspdk_bdev_aio.so.6.0 00:06:39.346 LIB libspdk_bdev_zone_block.a 00:06:39.346 SYMLINK libspdk_bdev_xnvme.so 00:06:39.346 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:39.346 LIB libspdk_bdev_ftl.a 00:06:39.346 SO libspdk_bdev_zone_block.so.6.0 00:06:39.346 SYMLINK libspdk_bdev_iscsi.so 00:06:39.346 SYMLINK libspdk_bdev_aio.so 00:06:39.346 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:39.346 CC module/bdev/raid/raid0.o 00:06:39.346 CC module/bdev/raid/raid1.o 00:06:39.346 SO libspdk_bdev_ftl.so.6.0 00:06:39.346 SYMLINK libspdk_bdev_zone_block.so 00:06:39.346 CC module/bdev/raid/concat.o 00:06:39.346 SYMLINK libspdk_bdev_ftl.so 00:06:39.604 LIB libspdk_bdev_virtio.a 00:06:39.862 LIB libspdk_bdev_raid.a 00:06:39.862 SO 
libspdk_bdev_virtio.so.6.0 00:06:39.862 SO libspdk_bdev_raid.so.6.0 00:06:39.862 SYMLINK libspdk_bdev_virtio.so 00:06:39.862 SYMLINK libspdk_bdev_raid.so 00:06:41.770 LIB libspdk_bdev_nvme.a 00:06:41.770 SO libspdk_bdev_nvme.so.7.1 00:06:41.770 SYMLINK libspdk_bdev_nvme.so 00:06:42.336 CC module/event/subsystems/sock/sock.o 00:06:42.336 CC module/event/subsystems/keyring/keyring.o 00:06:42.336 CC module/event/subsystems/iobuf/iobuf.o 00:06:42.336 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:42.336 CC module/event/subsystems/scheduler/scheduler.o 00:06:42.336 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:42.336 CC module/event/subsystems/vmd/vmd.o 00:06:42.336 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:42.336 CC module/event/subsystems/fsdev/fsdev.o 00:06:42.595 LIB libspdk_event_keyring.a 00:06:42.595 LIB libspdk_event_scheduler.a 00:06:42.595 LIB libspdk_event_sock.a 00:06:42.595 LIB libspdk_event_vmd.a 00:06:42.595 LIB libspdk_event_fsdev.a 00:06:42.595 LIB libspdk_event_iobuf.a 00:06:42.595 SO libspdk_event_keyring.so.1.0 00:06:42.595 SO libspdk_event_scheduler.so.4.0 00:06:42.595 SO libspdk_event_sock.so.5.0 00:06:42.595 SO libspdk_event_fsdev.so.1.0 00:06:42.595 SO libspdk_event_vmd.so.6.0 00:06:42.595 LIB libspdk_event_vhost_blk.a 00:06:42.595 SO libspdk_event_iobuf.so.3.0 00:06:42.595 SO libspdk_event_vhost_blk.so.3.0 00:06:42.595 SYMLINK libspdk_event_sock.so 00:06:42.595 SYMLINK libspdk_event_scheduler.so 00:06:42.595 SYMLINK libspdk_event_fsdev.so 00:06:42.595 SYMLINK libspdk_event_keyring.so 00:06:42.595 SYMLINK libspdk_event_vmd.so 00:06:42.595 SYMLINK libspdk_event_iobuf.so 00:06:42.595 SYMLINK libspdk_event_vhost_blk.so 00:06:43.162 CC module/event/subsystems/accel/accel.o 00:06:43.420 LIB libspdk_event_accel.a 00:06:43.420 SO libspdk_event_accel.so.6.0 00:06:43.420 SYMLINK libspdk_event_accel.so 00:06:43.989 CC module/event/subsystems/bdev/bdev.o 00:06:43.989 LIB libspdk_event_bdev.a 00:06:43.989 SO libspdk_event_bdev.so.6.0 00:06:44.250 SYMLINK libspdk_event_bdev.so 00:06:44.510 CC module/event/subsystems/scsi/scsi.o 00:06:44.510 CC module/event/subsystems/nbd/nbd.o 00:06:44.510 CC module/event/subsystems/ublk/ublk.o 00:06:44.510 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:44.510 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:44.769 LIB libspdk_event_scsi.a 00:06:44.769 LIB libspdk_event_nbd.a 00:06:44.769 SO libspdk_event_scsi.so.6.0 00:06:44.769 SO libspdk_event_nbd.so.6.0 00:06:44.769 LIB libspdk_event_ublk.a 00:06:44.769 SYMLINK libspdk_event_nbd.so 00:06:44.769 SYMLINK libspdk_event_scsi.so 00:06:44.769 LIB libspdk_event_nvmf.a 00:06:44.769 SO libspdk_event_ublk.so.3.0 00:06:44.769 SO libspdk_event_nvmf.so.6.0 00:06:44.769 SYMLINK libspdk_event_ublk.so 00:06:45.029 SYMLINK libspdk_event_nvmf.so 00:06:45.288 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:45.288 CC module/event/subsystems/iscsi/iscsi.o 00:06:45.288 LIB libspdk_event_vhost_scsi.a 00:06:45.288 LIB libspdk_event_iscsi.a 00:06:45.548 SO libspdk_event_vhost_scsi.so.3.0 00:06:45.548 SO libspdk_event_iscsi.so.6.0 00:06:45.548 SYMLINK libspdk_event_vhost_scsi.so 00:06:45.548 SYMLINK libspdk_event_iscsi.so 00:06:45.807 SO libspdk.so.6.0 00:06:45.807 SYMLINK libspdk.so 00:06:46.065 TEST_HEADER include/spdk/accel.h 00:06:46.065 TEST_HEADER include/spdk/accel_module.h 00:06:46.065 TEST_HEADER include/spdk/assert.h 00:06:46.065 CXX app/trace/trace.o 00:06:46.065 TEST_HEADER include/spdk/barrier.h 00:06:46.065 TEST_HEADER include/spdk/base64.h 00:06:46.065 CC 
test/rpc_client/rpc_client_test.o 00:06:46.065 TEST_HEADER include/spdk/bdev.h 00:06:46.065 TEST_HEADER include/spdk/bdev_module.h 00:06:46.065 TEST_HEADER include/spdk/bdev_zone.h 00:06:46.065 TEST_HEADER include/spdk/bit_array.h 00:06:46.065 TEST_HEADER include/spdk/bit_pool.h 00:06:46.065 TEST_HEADER include/spdk/blob_bdev.h 00:06:46.065 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:46.065 TEST_HEADER include/spdk/blobfs.h 00:06:46.065 TEST_HEADER include/spdk/blob.h 00:06:46.065 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:46.065 TEST_HEADER include/spdk/conf.h 00:06:46.065 TEST_HEADER include/spdk/config.h 00:06:46.065 TEST_HEADER include/spdk/cpuset.h 00:06:46.065 TEST_HEADER include/spdk/crc16.h 00:06:46.065 TEST_HEADER include/spdk/crc32.h 00:06:46.065 TEST_HEADER include/spdk/crc64.h 00:06:46.065 TEST_HEADER include/spdk/dif.h 00:06:46.065 TEST_HEADER include/spdk/dma.h 00:06:46.065 TEST_HEADER include/spdk/endian.h 00:06:46.065 TEST_HEADER include/spdk/env_dpdk.h 00:06:46.065 TEST_HEADER include/spdk/env.h 00:06:46.065 TEST_HEADER include/spdk/event.h 00:06:46.065 TEST_HEADER include/spdk/fd_group.h 00:06:46.065 TEST_HEADER include/spdk/fd.h 00:06:46.065 TEST_HEADER include/spdk/file.h 00:06:46.065 TEST_HEADER include/spdk/fsdev.h 00:06:46.065 TEST_HEADER include/spdk/fsdev_module.h 00:06:46.325 TEST_HEADER include/spdk/ftl.h 00:06:46.325 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:46.325 TEST_HEADER include/spdk/gpt_spec.h 00:06:46.325 TEST_HEADER include/spdk/hexlify.h 00:06:46.325 TEST_HEADER include/spdk/histogram_data.h 00:06:46.325 TEST_HEADER include/spdk/idxd.h 00:06:46.325 CC examples/util/zipf/zipf.o 00:06:46.325 TEST_HEADER include/spdk/idxd_spec.h 00:06:46.325 CC test/thread/poller_perf/poller_perf.o 00:06:46.325 TEST_HEADER include/spdk/init.h 00:06:46.325 TEST_HEADER include/spdk/ioat.h 00:06:46.325 TEST_HEADER include/spdk/ioat_spec.h 00:06:46.325 TEST_HEADER include/spdk/iscsi_spec.h 00:06:46.325 CC examples/ioat/perf/perf.o 00:06:46.325 TEST_HEADER include/spdk/json.h 00:06:46.325 TEST_HEADER include/spdk/jsonrpc.h 00:06:46.325 TEST_HEADER include/spdk/keyring.h 00:06:46.325 TEST_HEADER include/spdk/keyring_module.h 00:06:46.325 TEST_HEADER include/spdk/likely.h 00:06:46.325 TEST_HEADER include/spdk/log.h 00:06:46.325 TEST_HEADER include/spdk/lvol.h 00:06:46.325 TEST_HEADER include/spdk/md5.h 00:06:46.325 TEST_HEADER include/spdk/memory.h 00:06:46.325 CC test/app/bdev_svc/bdev_svc.o 00:06:46.325 TEST_HEADER include/spdk/mmio.h 00:06:46.325 TEST_HEADER include/spdk/nbd.h 00:06:46.325 TEST_HEADER include/spdk/net.h 00:06:46.325 TEST_HEADER include/spdk/notify.h 00:06:46.325 TEST_HEADER include/spdk/nvme.h 00:06:46.325 TEST_HEADER include/spdk/nvme_intel.h 00:06:46.325 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:46.325 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:46.325 TEST_HEADER include/spdk/nvme_spec.h 00:06:46.325 TEST_HEADER include/spdk/nvme_zns.h 00:06:46.325 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:46.325 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:46.325 TEST_HEADER include/spdk/nvmf.h 00:06:46.325 CC test/dma/test_dma/test_dma.o 00:06:46.325 TEST_HEADER include/spdk/nvmf_spec.h 00:06:46.325 TEST_HEADER include/spdk/nvmf_transport.h 00:06:46.325 TEST_HEADER include/spdk/opal.h 00:06:46.325 TEST_HEADER include/spdk/opal_spec.h 00:06:46.325 TEST_HEADER include/spdk/pci_ids.h 00:06:46.325 TEST_HEADER include/spdk/pipe.h 00:06:46.325 TEST_HEADER include/spdk/queue.h 00:06:46.325 TEST_HEADER include/spdk/reduce.h 00:06:46.325 TEST_HEADER 
include/spdk/rpc.h 00:06:46.325 TEST_HEADER include/spdk/scheduler.h 00:06:46.325 TEST_HEADER include/spdk/scsi.h 00:06:46.325 TEST_HEADER include/spdk/scsi_spec.h 00:06:46.325 TEST_HEADER include/spdk/sock.h 00:06:46.325 CC test/env/mem_callbacks/mem_callbacks.o 00:06:46.325 TEST_HEADER include/spdk/stdinc.h 00:06:46.325 TEST_HEADER include/spdk/string.h 00:06:46.325 TEST_HEADER include/spdk/thread.h 00:06:46.325 TEST_HEADER include/spdk/trace.h 00:06:46.325 TEST_HEADER include/spdk/trace_parser.h 00:06:46.325 TEST_HEADER include/spdk/tree.h 00:06:46.325 TEST_HEADER include/spdk/ublk.h 00:06:46.325 TEST_HEADER include/spdk/util.h 00:06:46.325 TEST_HEADER include/spdk/uuid.h 00:06:46.325 TEST_HEADER include/spdk/version.h 00:06:46.325 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:46.325 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:46.325 TEST_HEADER include/spdk/vhost.h 00:06:46.325 TEST_HEADER include/spdk/vmd.h 00:06:46.325 TEST_HEADER include/spdk/xor.h 00:06:46.325 LINK rpc_client_test 00:06:46.325 TEST_HEADER include/spdk/zipf.h 00:06:46.325 LINK poller_perf 00:06:46.325 CXX test/cpp_headers/accel.o 00:06:46.325 LINK interrupt_tgt 00:06:46.325 LINK zipf 00:06:46.584 LINK bdev_svc 00:06:46.584 LINK spdk_trace 00:06:46.584 CXX test/cpp_headers/accel_module.o 00:06:46.584 LINK ioat_perf 00:06:46.584 CXX test/cpp_headers/assert.o 00:06:46.584 CXX test/cpp_headers/barrier.o 00:06:46.584 CC app/trace_record/trace_record.o 00:06:46.584 CXX test/cpp_headers/base64.o 00:06:46.843 CC examples/ioat/verify/verify.o 00:06:46.843 CXX test/cpp_headers/bdev.o 00:06:46.843 LINK mem_callbacks 00:06:46.843 LINK test_dma 00:06:46.843 CC test/event/event_perf/event_perf.o 00:06:46.843 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:46.843 CC test/event/reactor/reactor.o 00:06:47.101 CC examples/thread/thread/thread_ex.o 00:06:47.101 CC examples/sock/hello_world/hello_sock.o 00:06:47.101 CXX test/cpp_headers/bdev_module.o 00:06:47.101 LINK spdk_trace_record 00:06:47.101 LINK verify 00:06:47.101 LINK event_perf 00:06:47.101 CC test/env/vtophys/vtophys.o 00:06:47.101 CXX test/cpp_headers/bdev_zone.o 00:06:47.101 LINK reactor 00:06:47.359 LINK thread 00:06:47.359 LINK hello_sock 00:06:47.359 LINK vtophys 00:06:47.360 CXX test/cpp_headers/bit_array.o 00:06:47.360 CC test/event/reactor_perf/reactor_perf.o 00:06:47.360 CXX test/cpp_headers/bit_pool.o 00:06:47.360 CC app/nvmf_tgt/nvmf_main.o 00:06:47.360 CC examples/vmd/lsvmd/lsvmd.o 00:06:47.360 CC test/accel/dif/dif.o 00:06:47.360 LINK nvme_fuzz 00:06:47.618 LINK reactor_perf 00:06:47.618 CXX test/cpp_headers/blob_bdev.o 00:06:47.618 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:47.618 LINK lsvmd 00:06:47.618 LINK nvmf_tgt 00:06:47.618 CC test/event/app_repeat/app_repeat.o 00:06:47.618 CC examples/idxd/perf/perf.o 00:06:47.618 LINK env_dpdk_post_init 00:06:47.876 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:47.876 CXX test/cpp_headers/blobfs_bdev.o 00:06:47.876 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:47.876 LINK app_repeat 00:06:47.876 CC examples/vmd/led/led.o 00:06:47.876 CC test/event/scheduler/scheduler.o 00:06:47.876 CXX test/cpp_headers/blobfs.o 00:06:47.876 CC app/iscsi_tgt/iscsi_tgt.o 00:06:47.876 LINK led 00:06:48.134 CC test/env/memory/memory_ut.o 00:06:48.134 LINK hello_fsdev 00:06:48.134 LINK idxd_perf 00:06:48.134 CC app/spdk_tgt/spdk_tgt.o 00:06:48.134 LINK scheduler 00:06:48.134 LINK dif 00:06:48.134 CXX test/cpp_headers/blob.o 00:06:48.134 LINK iscsi_tgt 00:06:48.394 CXX test/cpp_headers/conf.o 00:06:48.394 LINK spdk_tgt 
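The CXX test/cpp_headers/<name>.o steps interleaved above compile each public SPDK header as its own translation unit, which appears to be a self-containment check: a header that quietly depends on another include fails here. A minimal shell rendering of that idea, assuming it is run from the root of an SPDK checkout (a sketch of the technique, not the project's actual make rule):

  for hdr in include/spdk/*.h; do
    # build a one-line TU that includes only this header; a failure means the
    # header depends on something the consumer would have to include first
    echo "#include <spdk/$(basename "$hdr")>" |
      g++ -x c++ -std=c++11 -I include -c - -o /dev/null ||
      echo "not self-contained: $hdr"
  done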
00:06:48.394 CC test/blobfs/mkfs/mkfs.o 00:06:48.394 CC test/app/histogram_perf/histogram_perf.o 00:06:48.394 CC app/spdk_lspci/spdk_lspci.o 00:06:48.394 CC test/env/pci/pci_ut.o 00:06:48.394 CXX test/cpp_headers/config.o 00:06:48.394 CXX test/cpp_headers/cpuset.o 00:06:48.657 CC examples/accel/perf/accel_perf.o 00:06:48.657 LINK histogram_perf 00:06:48.657 LINK spdk_lspci 00:06:48.657 LINK mkfs 00:06:48.657 CC test/app/jsoncat/jsoncat.o 00:06:48.657 CXX test/cpp_headers/crc16.o 00:06:48.657 CC test/app/stub/stub.o 00:06:48.657 CC app/spdk_nvme_perf/perf.o 00:06:48.914 LINK jsoncat 00:06:48.914 CXX test/cpp_headers/crc32.o 00:06:48.914 LINK pci_ut 00:06:48.914 LINK stub 00:06:49.171 CXX test/cpp_headers/crc64.o 00:06:49.171 LINK accel_perf 00:06:49.171 CC test/lvol/esnap/esnap.o 00:06:49.171 CC examples/blob/hello_world/hello_blob.o 00:06:49.171 CXX test/cpp_headers/dif.o 00:06:49.171 CC examples/nvme/hello_world/hello_world.o 00:06:49.171 CXX test/cpp_headers/dma.o 00:06:49.429 LINK hello_blob 00:06:49.429 CC examples/nvme/reconnect/reconnect.o 00:06:49.429 CXX test/cpp_headers/endian.o 00:06:49.429 LINK memory_ut 00:06:49.429 LINK hello_world 00:06:49.429 CC examples/blob/cli/blobcli.o 00:06:49.686 CC examples/bdev/hello_world/hello_bdev.o 00:06:49.686 CXX test/cpp_headers/env_dpdk.o 00:06:49.686 LINK spdk_nvme_perf 00:06:49.686 CXX test/cpp_headers/env.o 00:06:49.686 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:49.686 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:49.944 LINK hello_bdev 00:06:49.944 CXX test/cpp_headers/event.o 00:06:49.944 LINK reconnect 00:06:49.944 CXX test/cpp_headers/fd_group.o 00:06:49.944 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:49.944 CC app/spdk_nvme_identify/identify.o 00:06:49.944 LINK iscsi_fuzz 00:06:50.201 CXX test/cpp_headers/fd.o 00:06:50.201 CC examples/nvme/arbitration/arbitration.o 00:06:50.201 LINK blobcli 00:06:50.201 CC examples/bdev/bdevperf/bdevperf.o 00:06:50.201 CXX test/cpp_headers/file.o 00:06:50.201 CC examples/nvme/hotplug/hotplug.o 00:06:50.201 LINK vhost_fuzz 00:06:50.459 CXX test/cpp_headers/fsdev.o 00:06:50.459 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:50.459 CXX test/cpp_headers/fsdev_module.o 00:06:50.459 CC examples/nvme/abort/abort.o 00:06:50.460 LINK arbitration 00:06:50.460 LINK cmb_copy 00:06:50.460 LINK hotplug 00:06:50.460 LINK nvme_manage 00:06:50.460 CXX test/cpp_headers/ftl.o 00:06:50.716 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:50.716 CXX test/cpp_headers/fuse_dispatcher.o 00:06:50.716 CXX test/cpp_headers/gpt_spec.o 00:06:50.716 CXX test/cpp_headers/hexlify.o 00:06:50.716 CXX test/cpp_headers/histogram_data.o 00:06:50.716 LINK pmr_persistence 00:06:50.716 CXX test/cpp_headers/idxd.o 00:06:50.973 CC app/spdk_nvme_discover/discovery_aer.o 00:06:50.973 LINK abort 00:06:50.973 CXX test/cpp_headers/idxd_spec.o 00:06:50.973 CXX test/cpp_headers/init.o 00:06:50.973 LINK bdevperf 00:06:50.973 CXX test/cpp_headers/ioat.o 00:06:50.973 CC app/spdk_top/spdk_top.o 00:06:50.973 CXX test/cpp_headers/ioat_spec.o 00:06:50.973 LINK spdk_nvme_identify 00:06:50.973 CXX test/cpp_headers/iscsi_spec.o 00:06:50.973 LINK spdk_nvme_discover 00:06:51.230 CXX test/cpp_headers/json.o 00:06:51.230 CXX test/cpp_headers/jsonrpc.o 00:06:51.230 CXX test/cpp_headers/keyring.o 00:06:51.230 CC app/vhost/vhost.o 00:06:51.230 CXX test/cpp_headers/keyring_module.o 00:06:51.230 CXX test/cpp_headers/likely.o 00:06:51.489 CC app/spdk_dd/spdk_dd.o 00:06:51.489 CC test/nvme/aer/aer.o 00:06:51.489 LINK vhost 00:06:51.489 CC 
examples/nvmf/nvmf/nvmf.o 00:06:51.489 CC test/bdev/bdevio/bdevio.o 00:06:51.489 CXX test/cpp_headers/log.o 00:06:51.489 CC test/nvme/reset/reset.o 00:06:51.748 CC test/nvme/sgl/sgl.o 00:06:51.748 CXX test/cpp_headers/lvol.o 00:06:51.748 LINK spdk_dd 00:06:51.748 LINK aer 00:06:52.006 LINK reset 00:06:52.006 LINK nvmf 00:06:52.006 CXX test/cpp_headers/md5.o 00:06:52.006 LINK sgl 00:06:52.006 LINK spdk_top 00:06:52.006 CC app/fio/nvme/fio_plugin.o 00:06:52.006 LINK bdevio 00:06:52.006 CC test/nvme/e2edp/nvme_dp.o 00:06:52.264 CC test/nvme/overhead/overhead.o 00:06:52.264 CXX test/cpp_headers/memory.o 00:06:52.264 CC test/nvme/startup/startup.o 00:06:52.264 CC test/nvme/err_injection/err_injection.o 00:06:52.264 CC test/nvme/reserve/reserve.o 00:06:52.264 CC test/nvme/simple_copy/simple_copy.o 00:06:52.264 CXX test/cpp_headers/mmio.o 00:06:52.264 CC app/fio/bdev/fio_plugin.o 00:06:52.264 LINK nvme_dp 00:06:52.264 LINK startup 00:06:52.523 LINK err_injection 00:06:52.523 LINK reserve 00:06:52.523 LINK overhead 00:06:52.523 CXX test/cpp_headers/nbd.o 00:06:52.523 CXX test/cpp_headers/net.o 00:06:52.523 LINK simple_copy 00:06:52.523 CC test/nvme/connect_stress/connect_stress.o 00:06:52.782 CC test/nvme/boot_partition/boot_partition.o 00:06:52.782 LINK spdk_nvme 00:06:52.782 CXX test/cpp_headers/notify.o 00:06:52.782 CC test/nvme/compliance/nvme_compliance.o 00:06:52.782 CC test/nvme/fused_ordering/fused_ordering.o 00:06:52.782 LINK connect_stress 00:06:52.782 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:52.782 LINK boot_partition 00:06:52.782 CXX test/cpp_headers/nvme.o 00:06:52.782 CC test/nvme/fdp/fdp.o 00:06:52.782 CC test/nvme/cuse/cuse.o 00:06:53.042 LINK spdk_bdev 00:06:53.042 LINK fused_ordering 00:06:53.042 LINK doorbell_aers 00:06:53.042 CXX test/cpp_headers/nvme_intel.o 00:06:53.042 CXX test/cpp_headers/nvme_ocssd.o 00:06:53.042 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:53.042 CXX test/cpp_headers/nvme_spec.o 00:06:53.042 LINK nvme_compliance 00:06:53.300 CXX test/cpp_headers/nvme_zns.o 00:06:53.300 CXX test/cpp_headers/nvmf_cmd.o 00:06:53.300 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:53.300 CXX test/cpp_headers/nvmf.o 00:06:53.300 CXX test/cpp_headers/nvmf_spec.o 00:06:53.300 LINK fdp 00:06:53.300 CXX test/cpp_headers/nvmf_transport.o 00:06:53.300 CXX test/cpp_headers/opal.o 00:06:53.300 CXX test/cpp_headers/opal_spec.o 00:06:53.300 CXX test/cpp_headers/pci_ids.o 00:06:53.559 CXX test/cpp_headers/pipe.o 00:06:53.559 CXX test/cpp_headers/queue.o 00:06:53.559 CXX test/cpp_headers/reduce.o 00:06:53.559 CXX test/cpp_headers/rpc.o 00:06:53.559 CXX test/cpp_headers/scheduler.o 00:06:53.559 CXX test/cpp_headers/scsi.o 00:06:53.559 CXX test/cpp_headers/scsi_spec.o 00:06:53.559 CXX test/cpp_headers/sock.o 00:06:53.559 CXX test/cpp_headers/stdinc.o 00:06:53.560 CXX test/cpp_headers/string.o 00:06:53.560 CXX test/cpp_headers/thread.o 00:06:53.560 CXX test/cpp_headers/trace.o 00:06:53.560 CXX test/cpp_headers/trace_parser.o 00:06:53.818 CXX test/cpp_headers/tree.o 00:06:53.818 CXX test/cpp_headers/ublk.o 00:06:53.818 CXX test/cpp_headers/util.o 00:06:53.818 CXX test/cpp_headers/uuid.o 00:06:53.818 CXX test/cpp_headers/version.o 00:06:53.818 CXX test/cpp_headers/vfio_user_pci.o 00:06:53.818 CXX test/cpp_headers/vfio_user_spec.o 00:06:53.818 CXX test/cpp_headers/vhost.o 00:06:53.818 CXX test/cpp_headers/vmd.o 00:06:53.818 CXX test/cpp_headers/xor.o 00:06:53.818 CXX test/cpp_headers/zipf.o 00:06:54.386 LINK cuse 00:06:55.323 LINK esnap 00:06:55.890 00:06:55.890 real 1m44.321s 00:06:55.890 
user 8m51.437s 00:06:55.890 sys 2m19.054s 00:06:55.890 20:25:03 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:55.890 20:25:03 make -- common/autotest_common.sh@10 -- $ set +x 00:06:55.890 ************************************ 00:06:55.890 END TEST make 00:06:55.890 ************************************ 00:06:55.890 20:25:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:55.890 20:25:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:55.890 20:25:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:55.890 20:25:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.890 20:25:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:55.890 20:25:03 -- pm/common@44 -- $ pid=5289 00:06:55.890 20:25:03 -- pm/common@50 -- $ kill -TERM 5289 00:06:55.890 20:25:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:55.890 20:25:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:55.890 20:25:03 -- pm/common@44 -- $ pid=5290 00:06:55.890 20:25:03 -- pm/common@50 -- $ kill -TERM 5290 00:06:55.890 20:25:03 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:55.890 20:25:03 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:56.150 20:25:04 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.150 20:25:04 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.150 20:25:04 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.150 20:25:04 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.150 20:25:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.150 20:25:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.150 20:25:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.150 20:25:04 -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.150 20:25:04 -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.150 20:25:04 -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.150 20:25:04 -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.150 20:25:04 -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.150 20:25:04 -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.150 20:25:04 -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.150 20:25:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.150 20:25:04 -- scripts/common.sh@344 -- # case "$op" in 00:06:56.150 20:25:04 -- scripts/common.sh@345 -- # : 1 00:06:56.150 20:25:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.150 20:25:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.150 20:25:04 -- scripts/common.sh@365 -- # decimal 1 00:06:56.150 20:25:04 -- scripts/common.sh@353 -- # local d=1 00:06:56.150 20:25:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.150 20:25:04 -- scripts/common.sh@355 -- # echo 1 00:06:56.150 20:25:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.150 20:25:04 -- scripts/common.sh@366 -- # decimal 2 00:06:56.150 20:25:04 -- scripts/common.sh@353 -- # local d=2 00:06:56.150 20:25:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.150 20:25:04 -- scripts/common.sh@355 -- # echo 2 00:06:56.150 20:25:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.150 20:25:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.150 20:25:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.150 20:25:04 -- scripts/common.sh@368 -- # return 0 00:06:56.150 20:25:04 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.150 20:25:04 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.150 --rc genhtml_branch_coverage=1 00:06:56.150 --rc genhtml_function_coverage=1 00:06:56.150 --rc genhtml_legend=1 00:06:56.150 --rc geninfo_all_blocks=1 00:06:56.150 --rc geninfo_unexecuted_blocks=1 00:06:56.150 00:06:56.150 ' 00:06:56.150 20:25:04 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.151 --rc genhtml_branch_coverage=1 00:06:56.151 --rc genhtml_function_coverage=1 00:06:56.151 --rc genhtml_legend=1 00:06:56.151 --rc geninfo_all_blocks=1 00:06:56.151 --rc geninfo_unexecuted_blocks=1 00:06:56.151 00:06:56.151 ' 00:06:56.151 20:25:04 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.151 --rc genhtml_branch_coverage=1 00:06:56.151 --rc genhtml_function_coverage=1 00:06:56.151 --rc genhtml_legend=1 00:06:56.151 --rc geninfo_all_blocks=1 00:06:56.151 --rc geninfo_unexecuted_blocks=1 00:06:56.151 00:06:56.151 ' 00:06:56.151 20:25:04 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.151 --rc genhtml_branch_coverage=1 00:06:56.151 --rc genhtml_function_coverage=1 00:06:56.151 --rc genhtml_legend=1 00:06:56.151 --rc geninfo_all_blocks=1 00:06:56.151 --rc geninfo_unexecuted_blocks=1 00:06:56.151 00:06:56.151 ' 00:06:56.151 20:25:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:56.151 20:25:04 -- nvmf/common.sh@7 -- # uname -s 00:06:56.151 20:25:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.151 20:25:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.151 20:25:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.151 20:25:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.151 20:25:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.151 20:25:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.151 20:25:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.151 20:25:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.151 20:25:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.151 20:25:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.151 20:25:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5d97dcf-b1c6-43b0-8642-7f1ad1f07ee4 00:06:56.151 
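The host identity generated at this point feeds every later NVMe-oF connection in the test suite. A paraphrase of what the traced nvmf/common.sh lines assemble; the final connect command is an illustrative assumption pieced together from NVMF_TCP_IP_ADDRESS=127.0.0.1 and NVMF_PORT=4420 above and the NVME_SUBNQN set just below, not a command this log actually ran:

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<random UUID>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # the same UUID doubles as the host ID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # hypothetical use during a TCP-transport test:
  nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn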
20:25:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=b5d97dcf-b1c6-43b0-8642-7f1ad1f07ee4 00:06:56.151 20:25:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.151 20:25:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.151 20:25:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:56.151 20:25:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.151 20:25:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.151 20:25:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.151 20:25:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.151 20:25:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.151 20:25:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.151 20:25:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.151 20:25:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.151 20:25:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.151 20:25:04 -- paths/export.sh@5 -- # export PATH 00:06:56.151 20:25:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.151 20:25:04 -- nvmf/common.sh@51 -- # : 0 00:06:56.151 20:25:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.151 20:25:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.151 20:25:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.151 20:25:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.151 20:25:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.151 20:25:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.151 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.151 20:25:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.151 20:25:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.151 20:25:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.151 20:25:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:56.151 20:25:04 -- spdk/autotest.sh@32 -- # uname -s 00:06:56.151 20:25:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:56.151 20:25:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:56.151 20:25:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:56.151 20:25:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:56.151 20:25:04 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:56.151 20:25:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:56.151 20:25:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:56.151 20:25:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:56.151 20:25:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54975 00:06:56.151 20:25:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:56.151 20:25:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:56.151 20:25:04 -- pm/common@17 -- # local monitor 00:06:56.151 20:25:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.151 20:25:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.151 20:25:04 -- pm/common@25 -- # sleep 1 00:06:56.151 20:25:04 -- pm/common@21 -- # date +%s 00:06:56.151 20:25:04 -- pm/common@21 -- # date +%s 00:06:56.151 20:25:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732566304 00:06:56.151 20:25:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732566304 00:06:56.411 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732566304_collect-vmstat.pm.log 00:06:56.411 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732566304_collect-cpu-load.pm.log 00:06:57.348 20:25:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:57.348 20:25:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:57.348 20:25:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.348 20:25:05 -- common/autotest_common.sh@10 -- # set +x 00:06:57.348 20:25:05 -- spdk/autotest.sh@59 -- # create_test_list 00:06:57.348 20:25:05 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:57.348 20:25:05 -- common/autotest_common.sh@10 -- # set +x 00:06:57.348 20:25:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:57.348 20:25:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:57.348 20:25:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:57.348 20:25:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:57.348 20:25:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:57.348 20:25:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:57.348 20:25:05 -- common/autotest_common.sh@1457 -- # uname 00:06:57.348 20:25:05 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:57.348 20:25:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:57.348 20:25:05 -- common/autotest_common.sh@1477 -- # uname 00:06:57.348 20:25:05 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:57.348 20:25:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:57.348 20:25:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:57.348 lcov: LCOV version 1.15 00:06:57.348 20:25:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:12.225 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:12.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:30.313 20:25:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:30.313 20:25:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.313 20:25:36 -- common/autotest_common.sh@10 -- # set +x 00:07:30.313 20:25:36 -- spdk/autotest.sh@78 -- # rm -f 00:07:30.313 20:25:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:30.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.313 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:30.313 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:30.313 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:30.313 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:30.313 20:25:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:30.313 20:25:37 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:30.313 20:25:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:30.313 20:25:37 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:30.313 20:25:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:30.313 20:25:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:30.313 20:25:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:30.313 20:25:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:30.313 20:25:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:30.313 20:25:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:30.313 20:25:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:30.313 20:25:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:30.313 20:25:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:30.313 20:25:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:30.313 20:25:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:30.313 20:25:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:07:30.313 20:25:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:07:30.314 20:25:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:30.314 20:25:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:30.314 20:25:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:30.314 20:25:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:07:30.314 20:25:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:07:30.314 20:25:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:30.314 20:25:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:30.314 20:25:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:30.314 20:25:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:07:30.314 20:25:37 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:07:30.314 20:25:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:07:30.314 
20:25:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:30.314 20:25:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:30.314 20:25:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:30.314 20:25:37 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:30.314 20:25:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:30.314 20:25:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:30.314 20:25:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:30.314 20:25:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:30.314 20:25:37 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:30.314 20:25:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:30.314 20:25:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:30.314 20:25:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:30.314 20:25:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.314 20:25:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:30.314 20:25:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:30.314 20:25:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:30.314 20:25:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:30.314 No valid GPT data, bailing 00:07:30.314 20:25:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # pt= 00:07:30.314 20:25:38 -- scripts/common.sh@395 -- # return 1 00:07:30.314 20:25:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:30.314 1+0 records in 00:07:30.314 1+0 records out 00:07:30.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155259 s, 67.5 MB/s 00:07:30.314 20:25:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.314 20:25:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:30.314 20:25:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:30.314 20:25:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:30.314 20:25:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:30.314 No valid GPT data, bailing 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # pt= 00:07:30.314 20:25:38 -- scripts/common.sh@395 -- # return 1 00:07:30.314 20:25:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:30.314 1+0 records in 00:07:30.314 1+0 records out 00:07:30.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639074 s, 164 MB/s 00:07:30.314 20:25:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.314 20:25:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:30.314 20:25:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:30.314 20:25:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:30.314 20:25:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:07:30.314 No valid GPT data, bailing 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # pt= 00:07:30.314 20:25:38 -- scripts/common.sh@395 -- # return 1 00:07:30.314 20:25:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:30.314 1+0 
records in 00:07:30.314 1+0 records out 00:07:30.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458922 s, 228 MB/s 00:07:30.314 20:25:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.314 20:25:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:30.314 20:25:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:30.314 20:25:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:30.314 20:25:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:30.314 No valid GPT data, bailing 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # pt= 00:07:30.314 20:25:38 -- scripts/common.sh@395 -- # return 1 00:07:30.314 20:25:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:30.314 1+0 records in 00:07:30.314 1+0 records out 00:07:30.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439473 s, 239 MB/s 00:07:30.314 20:25:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.314 20:25:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:30.314 20:25:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:30.314 20:25:38 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:30.314 20:25:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:30.314 No valid GPT data, bailing 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # pt= 00:07:30.314 20:25:38 -- scripts/common.sh@395 -- # return 1 00:07:30.314 20:25:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:30.314 1+0 records in 00:07:30.314 1+0 records out 00:07:30.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667743 s, 157 MB/s 00:07:30.314 20:25:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:30.314 20:25:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:30.314 20:25:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:30.314 20:25:38 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:30.314 20:25:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:30.314 No valid GPT data, bailing 00:07:30.314 20:25:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:30.573 20:25:38 -- scripts/common.sh@394 -- # pt= 00:07:30.573 20:25:38 -- scripts/common.sh@395 -- # return 1 00:07:30.573 20:25:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:30.573 1+0 records in 00:07:30.573 1+0 records out 00:07:30.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665537 s, 158 MB/s 00:07:30.573 20:25:38 -- spdk/autotest.sh@105 -- # sync 00:07:30.573 20:25:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:30.573 20:25:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:30.573 20:25:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:33.914 20:25:41 -- spdk/autotest.sh@111 -- # uname -s 00:07:33.914 20:25:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:33.914 20:25:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:33.914 20:25:41 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:34.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.049 
Hugepages 00:07:35.049 node hugesize free / total 00:07:35.049 node0 1048576kB 0 / 0 00:07:35.049 node0 2048kB 0 / 0 00:07:35.049 00:07:35.049 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:35.049 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:35.308 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:35.309 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:35.309 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:35.567 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:07:35.567 20:25:43 -- spdk/autotest.sh@117 -- # uname -s 00:07:35.567 20:25:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:35.567 20:25:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:35.567 20:25:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.106 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.106 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.106 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.106 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.400 20:25:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:38.337 20:25:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:38.337 20:25:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:38.337 20:25:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:38.337 20:25:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:38.337 20:25:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:38.337 20:25:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:38.337 20:25:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:38.337 20:25:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:38.337 20:25:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:38.337 20:25:46 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:38.337 20:25:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:38.337 20:25:46 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:38.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.166 Waiting for block devices as requested 00:07:39.166 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.426 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.426 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.426 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.700 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:44.700 20:25:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:44.700 20:25:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:44.700 20:25:52 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:44.700 20:25:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:44.700 20:25:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:44.700 20:25:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:44.700 20:25:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:44.700 20:25:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:44.700 20:25:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1543 -- # continue 00:07:44.700 20:25:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:44.700 20:25:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:44.700 20:25:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:44.700 20:25:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:44.700 20:25:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1543 -- # continue 00:07:44.700 20:25:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:44.700 20:25:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:44.700 20:25:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:44.700 20:25:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:44.700 20:25:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:44.700 20:25:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:44.700 20:25:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1543 -- # continue 00:07:44.700 20:25:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:44.700 20:25:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:44.700 20:25:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:44.700 20:25:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:44.700 20:25:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:44.700 20:25:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:44.701 20:25:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:44.701 20:25:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:44.701 20:25:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:44.701 20:25:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:44.701 20:25:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:44.701 20:25:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:44.701 20:25:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:44.701 20:25:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:44.701 20:25:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:44.701 20:25:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:44.701 20:25:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:44.701 20:25:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:44.701 20:25:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
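Each of the four near-identical blocks above applies the same per-controller gate inside nvme_namespace_revert (the final controller's continue lands just below). Condensed into a loop, the logic traced out of autotest_common.sh is roughly this, a paraphrase rather than the helper's literal text:

  for ctrl in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # 0x12a on these QEMU controllers
    oacs_ns_manage=$(( oacs & 0x8 ))                              # OACS bit 3: namespace management
    [[ $oacs_ns_manage -ne 0 ]] || continue
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity
    [[ $unvmcap -eq 0 ]] && continue   # fully allocated, so nothing to revert on this drive
  done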
00:07:44.701 20:25:52 -- common/autotest_common.sh@1543 -- # continue 00:07:44.701 20:25:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:44.960 20:25:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.960 20:25:52 -- common/autotest_common.sh@10 -- # set +x 00:07:44.960 20:25:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:44.960 20:25:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.960 20:25:52 -- common/autotest_common.sh@10 -- # set +x 00:07:44.960 20:25:52 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:45.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.466 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.466 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.466 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.466 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.466 20:25:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:46.466 20:25:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.466 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.466 20:25:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:46.466 20:25:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:46.466 20:25:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:46.466 20:25:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:46.466 20:25:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:46.466 20:25:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:46.466 20:25:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:46.466 20:25:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:46.466 20:25:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:46.466 20:25:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:46.466 20:25:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:46.466 20:25:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.466 20:25:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:46.726 20:25:54 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:46.726 20:25:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:46.726 20:25:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:46.726 20:25:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:46.726 20:25:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:46.726 20:25:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:46.726 20:25:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:46.726 20:25:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:46.726 20:25:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:46.726 20:25:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:46.726 20:25:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:46.726 20:25:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:46.726 20:25:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:46.726 20:25:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
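The device-ID probes above (the fourth controller's probe continues just below) are opal_revert_cleanup deciding whether any attached controller needs an OPAL revert before testing. Approximately, get_nvme_bdfs_by_id 0x0a54 reduces to the following, paraphrased from the trace:

  bdfs=()
  for bdf in $(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
    # 0x0a54 is an Intel data-center NVMe device ID that wants the revert;
    # QEMU's emulated controllers report 0x0010, so the list stays empty here
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
  done

With zero matches, the (( 0 > 0 )) check below falls through and the cleanup returns immediately.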
00:07:46.726 20:25:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:46.726 20:25:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:46.726 20:25:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:46.726 20:25:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:46.726 20:25:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:46.726 20:25:54 -- common/autotest_common.sh@1572 -- # return 0 00:07:46.726 20:25:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:46.726 20:25:54 -- common/autotest_common.sh@1580 -- # return 0 00:07:46.726 20:25:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:46.726 20:25:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:46.726 20:25:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:46.726 20:25:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:46.726 20:25:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:46.726 20:25:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.726 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.726 20:25:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:46.726 20:25:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:46.726 20:25:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.726 20:25:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.726 20:25:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.726 ************************************ 00:07:46.726 START TEST env 00:07:46.726 ************************************ 00:07:46.726 20:25:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:46.986 * Looking for test storage... 00:07:46.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.986 20:25:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.986 20:25:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.986 20:25:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.986 20:25:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.986 20:25:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.986 20:25:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.986 20:25:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.986 20:25:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.986 20:25:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.986 20:25:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.986 20:25:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.986 20:25:54 env -- scripts/common.sh@344 -- # case "$op" in 00:07:46.986 20:25:54 env -- scripts/common.sh@345 -- # : 1 00:07:46.986 20:25:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.986 20:25:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.986 20:25:54 env -- scripts/common.sh@365 -- # decimal 1 00:07:46.986 20:25:54 env -- scripts/common.sh@353 -- # local d=1 00:07:46.986 20:25:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.986 20:25:54 env -- scripts/common.sh@355 -- # echo 1 00:07:46.986 20:25:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.986 20:25:54 env -- scripts/common.sh@366 -- # decimal 2 00:07:46.986 20:25:54 env -- scripts/common.sh@353 -- # local d=2 00:07:46.986 20:25:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.986 20:25:54 env -- scripts/common.sh@355 -- # echo 2 00:07:46.986 20:25:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.986 20:25:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.986 20:25:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.986 20:25:54 env -- scripts/common.sh@368 -- # return 0 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.986 --rc genhtml_branch_coverage=1 00:07:46.986 --rc genhtml_function_coverage=1 00:07:46.986 --rc genhtml_legend=1 00:07:46.986 --rc geninfo_all_blocks=1 00:07:46.986 --rc geninfo_unexecuted_blocks=1 00:07:46.986 00:07:46.986 ' 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.986 --rc genhtml_branch_coverage=1 00:07:46.986 --rc genhtml_function_coverage=1 00:07:46.986 --rc genhtml_legend=1 00:07:46.986 --rc geninfo_all_blocks=1 00:07:46.986 --rc geninfo_unexecuted_blocks=1 00:07:46.986 00:07:46.986 ' 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.986 --rc genhtml_branch_coverage=1 00:07:46.986 --rc genhtml_function_coverage=1 00:07:46.986 --rc genhtml_legend=1 00:07:46.986 --rc geninfo_all_blocks=1 00:07:46.986 --rc geninfo_unexecuted_blocks=1 00:07:46.986 00:07:46.986 ' 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.986 --rc genhtml_branch_coverage=1 00:07:46.986 --rc genhtml_function_coverage=1 00:07:46.986 --rc genhtml_legend=1 00:07:46.986 --rc geninfo_all_blocks=1 00:07:46.986 --rc geninfo_unexecuted_blocks=1 00:07:46.986 00:07:46.986 ' 00:07:46.986 20:25:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.986 20:25:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.986 20:25:54 env -- common/autotest_common.sh@10 -- # set +x 00:07:46.986 ************************************ 00:07:46.986 START TEST env_memory 00:07:46.986 ************************************ 00:07:46.986 20:25:54 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:46.986 00:07:46.986 00:07:46.986 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.986 http://cunit.sourceforge.net/ 00:07:46.986 00:07:46.986 00:07:46.986 Suite: memory 00:07:46.986 Test: alloc and free memory map ...[2024-11-25 20:25:55.062501] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:46.986 passed 00:07:46.986 Test: mem map translation ...[2024-11-25 20:25:55.112880] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:46.986 [2024-11-25 20:25:55.112962] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:46.986 [2024-11-25 20:25:55.113029] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:46.986 [2024-11-25 20:25:55.113054] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:47.245 passed 00:07:47.245 Test: mem map registration ...[2024-11-25 20:25:55.183534] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:47.245 [2024-11-25 20:25:55.183625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:47.245 passed 00:07:47.245 Test: mem map adjacent registrations ...passed 00:07:47.245 00:07:47.245 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.245 suites 1 1 n/a 0 0 00:07:47.245 tests 4 4 4 0 0 00:07:47.245 asserts 152 152 152 0 n/a 00:07:47.245 00:07:47.245 Elapsed time = 0.250 seconds 00:07:47.245 00:07:47.245 real 0m0.303s 00:07:47.245 user 0m0.263s 00:07:47.245 sys 0m0.031s 00:07:47.245 20:25:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.245 20:25:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:47.245 ************************************ 00:07:47.245 END TEST env_memory 00:07:47.245 ************************************ 00:07:47.246 20:25:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:47.246 20:25:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.246 20:25:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.246 20:25:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:47.246 ************************************ 00:07:47.246 START TEST env_vtophys 00:07:47.246 ************************************ 00:07:47.246 20:25:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:47.504 EAL: lib.eal log level changed from notice to debug 00:07:47.504 EAL: Detected lcore 0 as core 0 on socket 0 00:07:47.504 EAL: Detected lcore 1 as core 0 on socket 0 00:07:47.504 EAL: Detected lcore 2 as core 0 on socket 0 00:07:47.504 EAL: Detected lcore 3 as core 0 on socket 0 00:07:47.504 EAL: Detected lcore 4 as core 0 on socket 0 00:07:47.505 EAL: Detected lcore 5 as core 0 on socket 0 00:07:47.505 EAL: Detected lcore 6 as core 0 on socket 0 00:07:47.505 EAL: Detected lcore 7 as core 0 on socket 0 00:07:47.505 EAL: Detected lcore 8 as core 0 on socket 0 00:07:47.505 EAL: Detected lcore 9 as core 0 on socket 0 00:07:47.505 EAL: Maximum logical cores by configuration: 128 00:07:47.505 EAL: Detected CPU lcores: 10 00:07:47.505 EAL: Detected NUMA nodes: 1 00:07:47.505 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:47.505 EAL: Detected shared linkage of DPDK 00:07:47.505 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:47.505 EAL: Selected IOVA mode 'PA' 00:07:47.505 EAL: Probing VFIO support... 00:07:47.505 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:47.505 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:47.505 EAL: Ask a virtual area of 0x2e000 bytes 00:07:47.505 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:47.505 EAL: Setting up physically contiguous memory... 00:07:47.505 EAL: Setting maximum number of open files to 524288 00:07:47.505 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:47.505 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:47.505 EAL: Ask a virtual area of 0x61000 bytes 00:07:47.505 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:47.505 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:47.505 EAL: Ask a virtual area of 0x400000000 bytes 00:07:47.505 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:47.505 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:47.505 EAL: Ask a virtual area of 0x61000 bytes 00:07:47.505 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:47.505 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:47.505 EAL: Ask a virtual area of 0x400000000 bytes 00:07:47.505 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:47.505 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:47.505 EAL: Ask a virtual area of 0x61000 bytes 00:07:47.505 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:47.505 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:47.505 EAL: Ask a virtual area of 0x400000000 bytes 00:07:47.505 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:47.505 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:47.505 EAL: Ask a virtual area of 0x61000 bytes 00:07:47.505 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:47.505 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:47.505 EAL: Ask a virtual area of 0x400000000 bytes 00:07:47.505 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:47.505 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:47.505 EAL: Hugepages will be freed exactly as allocated. 00:07:47.505 EAL: No shared files mode enabled, IPC is disabled 00:07:47.505 EAL: No shared files mode enabled, IPC is disabled 00:07:47.505 EAL: TSC frequency is ~2490000 KHz 00:07:47.505 EAL: Main lcore 0 is ready (tid=7f1d7e82ca40;cpuset=[0]) 00:07:47.505 EAL: Trying to obtain current memory policy. 00:07:47.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:47.505 EAL: Restoring previous memory policy: 0 00:07:47.505 EAL: request: mp_malloc_sync 00:07:47.505 EAL: No shared files mode enabled, IPC is disabled 00:07:47.505 EAL: Heap on socket 0 was expanded by 2MB 00:07:47.505 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:47.505 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:47.505 EAL: Mem event callback 'spdk:(nil)' registered 00:07:47.505 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:47.505 00:07:47.505 00:07:47.505 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.505 http://cunit.sourceforge.net/ 00:07:47.505 00:07:47.505 00:07:47.505 Suite: components_suite 00:07:48.073 Test: vtophys_malloc_test ...passed 00:07:48.073 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:48.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.073 EAL: Restoring previous memory policy: 4 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was expanded by 4MB 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was shrunk by 4MB 00:07:48.073 EAL: Trying to obtain current memory policy. 00:07:48.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.073 EAL: Restoring previous memory policy: 4 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was expanded by 6MB 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was shrunk by 6MB 00:07:48.073 EAL: Trying to obtain current memory policy. 00:07:48.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.073 EAL: Restoring previous memory policy: 4 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was expanded by 10MB 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was shrunk by 10MB 00:07:48.073 EAL: Trying to obtain current memory policy. 00:07:48.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.073 EAL: Restoring previous memory policy: 4 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was expanded by 18MB 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was shrunk by 18MB 00:07:48.073 EAL: Trying to obtain current memory policy. 00:07:48.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.073 EAL: Restoring previous memory policy: 4 00:07:48.073 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.073 EAL: request: mp_malloc_sync 00:07:48.073 EAL: No shared files mode enabled, IPC is disabled 00:07:48.073 EAL: Heap on socket 0 was expanded by 34MB 00:07:48.330 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.330 EAL: request: mp_malloc_sync 00:07:48.330 EAL: No shared files mode enabled, IPC is disabled 00:07:48.330 EAL: Heap on socket 0 was shrunk by 34MB 00:07:48.330 EAL: Trying to obtain current memory policy. 
00:07:48.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.330 EAL: Restoring previous memory policy: 4 00:07:48.330 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.330 EAL: request: mp_malloc_sync 00:07:48.330 EAL: No shared files mode enabled, IPC is disabled 00:07:48.330 EAL: Heap on socket 0 was expanded by 66MB 00:07:48.330 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.330 EAL: request: mp_malloc_sync 00:07:48.330 EAL: No shared files mode enabled, IPC is disabled 00:07:48.330 EAL: Heap on socket 0 was shrunk by 66MB 00:07:48.588 EAL: Trying to obtain current memory policy. 00:07:48.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.588 EAL: Restoring previous memory policy: 4 00:07:48.588 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.588 EAL: request: mp_malloc_sync 00:07:48.588 EAL: No shared files mode enabled, IPC is disabled 00:07:48.588 EAL: Heap on socket 0 was expanded by 130MB 00:07:48.846 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.846 EAL: request: mp_malloc_sync 00:07:48.846 EAL: No shared files mode enabled, IPC is disabled 00:07:48.846 EAL: Heap on socket 0 was shrunk by 130MB 00:07:49.104 EAL: Trying to obtain current memory policy. 00:07:49.104 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.104 EAL: Restoring previous memory policy: 4 00:07:49.104 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.104 EAL: request: mp_malloc_sync 00:07:49.104 EAL: No shared files mode enabled, IPC is disabled 00:07:49.104 EAL: Heap on socket 0 was expanded by 258MB 00:07:49.672 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.672 EAL: request: mp_malloc_sync 00:07:49.672 EAL: No shared files mode enabled, IPC is disabled 00:07:49.672 EAL: Heap on socket 0 was shrunk by 258MB 00:07:49.931 EAL: Trying to obtain current memory policy. 00:07:49.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.190 EAL: Restoring previous memory policy: 4 00:07:50.190 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.190 EAL: request: mp_malloc_sync 00:07:50.190 EAL: No shared files mode enabled, IPC is disabled 00:07:50.190 EAL: Heap on socket 0 was expanded by 514MB 00:07:51.127 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.127 EAL: request: mp_malloc_sync 00:07:51.127 EAL: No shared files mode enabled, IPC is disabled 00:07:51.127 EAL: Heap on socket 0 was shrunk by 514MB 00:07:52.065 EAL: Trying to obtain current memory policy. 
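A note on the progression above: every vtophys_spdk_malloc_test round grows the heap by 2^k + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, 514, and finally 1026 MB below), which looks like a power-of-two allocation plus one extra 2 MB hugepage of allocator overhead per round. To rerun just this suite by hand (a sketch; it assumes hugepages are already configured on the host):

    # standalone rerun of the vtophys suite exercised above
    /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys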
00:07:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:52.065 EAL: Restoring previous memory policy: 4 00:07:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:07:52.065 EAL: request: mp_malloc_sync 00:07:52.065 EAL: No shared files mode enabled, IPC is disabled 00:07:52.065 EAL: Heap on socket 0 was expanded by 1026MB 00:07:53.992 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.250 EAL: request: mp_malloc_sync 00:07:54.250 EAL: No shared files mode enabled, IPC is disabled 00:07:54.250 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:56.155 passed 00:07:56.155 00:07:56.155 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.155 suites 1 1 n/a 0 0 00:07:56.155 tests 2 2 2 0 0 00:07:56.155 asserts 5754 5754 5754 0 n/a 00:07:56.155 00:07:56.155 Elapsed time = 8.305 seconds 00:07:56.155 EAL: Calling mem event callback 'spdk:(nil)' 00:07:56.155 EAL: request: mp_malloc_sync 00:07:56.155 EAL: No shared files mode enabled, IPC is disabled 00:07:56.155 EAL: Heap on socket 0 was shrunk by 2MB 00:07:56.155 EAL: No shared files mode enabled, IPC is disabled 00:07:56.155 EAL: No shared files mode enabled, IPC is disabled 00:07:56.155 EAL: No shared files mode enabled, IPC is disabled 00:07:56.155 00:07:56.155 real 0m8.654s 00:07:56.155 user 0m7.615s 00:07:56.155 sys 0m0.873s 00:07:56.155 20:26:04 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.155 ************************************ 00:07:56.155 END TEST env_vtophys 00:07:56.155 ************************************ 00:07:56.155 20:26:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:56.155 20:26:04 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:56.155 20:26:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.155 20:26:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.155 20:26:04 env -- common/autotest_common.sh@10 -- # set +x 00:07:56.155 ************************************ 00:07:56.155 START TEST env_pci 00:07:56.155 ************************************ 00:07:56.155 20:26:04 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:56.155 00:07:56.155 00:07:56.155 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.155 http://cunit.sourceforge.net/ 00:07:56.155 00:07:56.155 00:07:56.155 Suite: pci 00:07:56.155 Test: pci_hook ...[2024-11-25 20:26:04.140315] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57828 has claimed it 00:07:56.155 passed 00:07:56.155 00:07:56.155 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.155 suites 1 1 n/a 0 0 00:07:56.155 tests 1 1 1 0 0 00:07:56.155 asserts 25 25 25 0 n/a 00:07:56.155 00:07:56.155 Elapsed time = 0.012 seconds 00:07:56.155 EAL: Cannot find device (10000:00:01.0) 00:07:56.155 EAL: Failed to attach device on primary process 00:07:56.155 00:07:56.155 real 0m0.123s 00:07:56.155 user 0m0.054s 00:07:56.155 sys 0m0.068s 00:07:56.155 20:26:04 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.155 20:26:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:56.155 ************************************ 00:07:56.155 END TEST env_pci 00:07:56.155 ************************************ 00:07:56.155 20:26:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:56.155 20:26:04 env -- env/env.sh@15 -- # uname 00:07:56.155 20:26:04 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:56.155 20:26:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:56.155 20:26:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:56.155 20:26:04 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.155 20:26:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.415 20:26:04 env -- common/autotest_common.sh@10 -- # set +x 00:07:56.415 ************************************ 00:07:56.415 START TEST env_dpdk_post_init 00:07:56.415 ************************************ 00:07:56.415 20:26:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:56.415 EAL: Detected CPU lcores: 10 00:07:56.415 EAL: Detected NUMA nodes: 1 00:07:56.415 EAL: Detected shared linkage of DPDK 00:07:56.415 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:56.415 EAL: Selected IOVA mode 'PA' 00:07:56.415 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:56.674 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:56.674 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:56.674 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:56.674 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:56.674 Starting DPDK initialization... 00:07:56.674 Starting SPDK post initialization... 00:07:56.674 SPDK NVMe probe 00:07:56.674 Attaching to 0000:00:10.0 00:07:56.674 Attaching to 0000:00:11.0 00:07:56.674 Attaching to 0000:00:12.0 00:07:56.674 Attaching to 0000:00:13.0 00:07:56.674 Attached to 0000:00:10.0 00:07:56.674 Attached to 0000:00:11.0 00:07:56.674 Attached to 0000:00:13.0 00:07:56.674 Attached to 0000:00:12.0 00:07:56.674 Cleaning up... 
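The attach order above (10.0, 11.0, 13.0, 12.0) differing from the probe order is likely just asynchronous completion of controller initialization. For reference, the same check can be rerun by hand with the exact flags the harness passed; the 1b36:0010 devices are QEMU-emulated NVMe controllers specific to this VM, and on other hosts scripts/setup.sh must first bind the target controllers to a userspace driver:

    # manual rerun with the same core mask and base virtual address as above
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000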
00:07:56.674 00:07:56.674 real 0m0.361s 00:07:56.674 user 0m0.116s 00:07:56.674 sys 0m0.148s 00:07:56.674 20:26:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.674 ************************************ 00:07:56.674 END TEST env_dpdk_post_init 00:07:56.674 20:26:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:56.674 ************************************ 00:07:56.674 20:26:04 env -- env/env.sh@26 -- # uname 00:07:56.674 20:26:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:56.674 20:26:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:56.674 20:26:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.674 20:26:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.674 20:26:04 env -- common/autotest_common.sh@10 -- # set +x 00:07:56.675 ************************************ 00:07:56.675 START TEST env_mem_callbacks 00:07:56.675 ************************************ 00:07:56.675 20:26:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:56.934 EAL: Detected CPU lcores: 10 00:07:56.934 EAL: Detected NUMA nodes: 1 00:07:56.934 EAL: Detected shared linkage of DPDK 00:07:56.934 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:56.934 EAL: Selected IOVA mode 'PA' 00:07:56.934 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:56.934 00:07:56.934 00:07:56.934 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.934 http://cunit.sourceforge.net/ 00:07:56.934 00:07:56.934 00:07:56.934 Suite: memory 00:07:56.934 Test: test ... 00:07:56.934 register 0x200000200000 2097152 00:07:56.934 malloc 3145728 00:07:56.934 register 0x200000400000 4194304 00:07:56.934 buf 0x2000004fffc0 len 3145728 PASSED 00:07:56.934 malloc 64 00:07:56.934 buf 0x2000004ffec0 len 64 PASSED 00:07:56.934 malloc 4194304 00:07:56.934 register 0x200000800000 6291456 00:07:56.934 buf 0x2000009fffc0 len 4194304 PASSED 00:07:56.934 free 0x2000004fffc0 3145728 00:07:56.934 free 0x2000004ffec0 64 00:07:56.934 unregister 0x200000400000 4194304 PASSED 00:07:56.934 free 0x2000009fffc0 4194304 00:07:56.934 unregister 0x200000800000 6291456 PASSED 00:07:56.934 malloc 8388608 00:07:56.934 register 0x200000400000 10485760 00:07:56.934 buf 0x2000005fffc0 len 8388608 PASSED 00:07:56.934 free 0x2000005fffc0 8388608 00:07:56.934 unregister 0x200000400000 10485760 PASSED 00:07:56.934 passed 00:07:56.934 00:07:56.934 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.934 suites 1 1 n/a 0 0 00:07:56.934 tests 1 1 1 0 0 00:07:56.934 asserts 15 15 15 0 n/a 00:07:56.934 00:07:56.934 Elapsed time = 0.083 seconds 00:07:57.193 00:07:57.193 real 0m0.322s 00:07:57.193 user 0m0.129s 00:07:57.193 sys 0m0.090s 00:07:57.193 20:26:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.193 20:26:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:57.193 ************************************ 00:07:57.193 END TEST env_mem_callbacks 00:07:57.193 ************************************ 00:07:57.193 00:07:57.193 real 0m10.395s 00:07:57.193 user 0m8.422s 00:07:57.193 sys 0m1.597s 00:07:57.193 20:26:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.193 20:26:05 env -- common/autotest_common.sh@10 -- # set +x 00:07:57.193 ************************************ 00:07:57.193 END TEST env 00:07:57.193 
************************************ 00:07:57.193 20:26:05 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:57.193 20:26:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.193 20:26:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.193 20:26:05 -- common/autotest_common.sh@10 -- # set +x 00:07:57.193 ************************************ 00:07:57.193 START TEST rpc 00:07:57.193 ************************************ 00:07:57.193 20:26:05 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:57.451 * Looking for test storage... 00:07:57.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.451 20:26:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.451 20:26:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.451 20:26:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.451 20:26:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.451 20:26:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.451 20:26:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.451 20:26:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.451 20:26:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.451 20:26:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.451 20:26:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.451 20:26:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.451 20:26:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:57.451 20:26:05 rpc -- scripts/common.sh@345 -- # : 1 00:07:57.451 20:26:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.451 20:26:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.451 20:26:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:57.451 20:26:05 rpc -- scripts/common.sh@353 -- # local d=1 00:07:57.451 20:26:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.451 20:26:05 rpc -- scripts/common.sh@355 -- # echo 1 00:07:57.451 20:26:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.451 20:26:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:57.451 20:26:05 rpc -- scripts/common.sh@353 -- # local d=2 00:07:57.451 20:26:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.451 20:26:05 rpc -- scripts/common.sh@355 -- # echo 2 00:07:57.451 20:26:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.451 20:26:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.451 20:26:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.451 20:26:05 rpc -- scripts/common.sh@368 -- # return 0 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.451 --rc genhtml_branch_coverage=1 00:07:57.451 --rc genhtml_function_coverage=1 00:07:57.451 --rc genhtml_legend=1 00:07:57.451 --rc geninfo_all_blocks=1 00:07:57.451 --rc geninfo_unexecuted_blocks=1 00:07:57.451 00:07:57.451 ' 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.451 --rc genhtml_branch_coverage=1 00:07:57.451 --rc genhtml_function_coverage=1 00:07:57.451 --rc genhtml_legend=1 00:07:57.451 --rc geninfo_all_blocks=1 00:07:57.451 --rc geninfo_unexecuted_blocks=1 00:07:57.451 00:07:57.451 ' 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.451 --rc genhtml_branch_coverage=1 00:07:57.451 --rc genhtml_function_coverage=1 00:07:57.451 --rc genhtml_legend=1 00:07:57.451 --rc geninfo_all_blocks=1 00:07:57.451 --rc geninfo_unexecuted_blocks=1 00:07:57.451 00:07:57.451 ' 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.451 --rc genhtml_branch_coverage=1 00:07:57.451 --rc genhtml_function_coverage=1 00:07:57.451 --rc genhtml_legend=1 00:07:57.451 --rc geninfo_all_blocks=1 00:07:57.451 --rc geninfo_unexecuted_blocks=1 00:07:57.451 00:07:57.451 ' 00:07:57.451 20:26:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57961 00:07:57.451 20:26:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:57.451 20:26:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.451 20:26:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57961 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 57961 ']' 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
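waitforlisten blocks here until the target answers RPC requests on /var/tmp/spdk.sock. A minimal stand-in for what it does (a sketch, not the helper's actual body) would poll the socket with a short timeout until any RPC succeeds:

    # poll the RPC socket until spdk_tgt is ready to serve requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done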
00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.451 20:26:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.451 [2024-11-25 20:26:05.560993] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:07:57.451 [2024-11-25 20:26:05.561138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57961 ] 00:07:57.710 [2024-11-25 20:26:05.747013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.978 [2024-11-25 20:26:05.858978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:57.978 [2024-11-25 20:26:05.859060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57961' to capture a snapshot of events at runtime. 00:07:57.978 [2024-11-25 20:26:05.859077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.978 [2024-11-25 20:26:05.859094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.978 [2024-11-25 20:26:05.859107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57961 for offline analysis/debug. 00:07:57.978 [2024-11-25 20:26:05.860563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.926 20:26:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.926 20:26:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:58.926 20:26:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:58.926 20:26:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:58.926 20:26:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:58.926 20:26:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:58.926 20:26:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.926 20:26:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.926 20:26:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.926 ************************************ 00:07:58.926 START TEST rpc_integrity 00:07:58.926 ************************************ 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:58.926 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.926 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:58.926 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:58.926 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:58.926 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.926 20:26:06 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.926 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:58.926 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.926 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.926 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:58.926 { 00:07:58.926 "name": "Malloc0", 00:07:58.926 "aliases": [ 00:07:58.926 "0c6d3cc8-2245-403c-b28a-7d4f78fe3700" 00:07:58.926 ], 00:07:58.926 "product_name": "Malloc disk", 00:07:58.926 "block_size": 512, 00:07:58.926 "num_blocks": 16384, 00:07:58.926 "uuid": "0c6d3cc8-2245-403c-b28a-7d4f78fe3700", 00:07:58.926 "assigned_rate_limits": { 00:07:58.926 "rw_ios_per_sec": 0, 00:07:58.927 "rw_mbytes_per_sec": 0, 00:07:58.927 "r_mbytes_per_sec": 0, 00:07:58.927 "w_mbytes_per_sec": 0 00:07:58.927 }, 00:07:58.927 "claimed": false, 00:07:58.927 "zoned": false, 00:07:58.927 "supported_io_types": { 00:07:58.927 "read": true, 00:07:58.927 "write": true, 00:07:58.927 "unmap": true, 00:07:58.927 "flush": true, 00:07:58.927 "reset": true, 00:07:58.927 "nvme_admin": false, 00:07:58.927 "nvme_io": false, 00:07:58.927 "nvme_io_md": false, 00:07:58.927 "write_zeroes": true, 00:07:58.927 "zcopy": true, 00:07:58.927 "get_zone_info": false, 00:07:58.927 "zone_management": false, 00:07:58.927 "zone_append": false, 00:07:58.927 "compare": false, 00:07:58.927 "compare_and_write": false, 00:07:58.927 "abort": true, 00:07:58.927 "seek_hole": false, 00:07:58.927 "seek_data": false, 00:07:58.927 "copy": true, 00:07:58.927 "nvme_iov_md": false 00:07:58.927 }, 00:07:58.927 "memory_domains": [ 00:07:58.927 { 00:07:58.927 "dma_device_id": "system", 00:07:58.927 "dma_device_type": 1 00:07:58.927 }, 00:07:58.927 { 00:07:58.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.927 "dma_device_type": 2 00:07:58.927 } 00:07:58.927 ], 00:07:58.927 "driver_specific": {} 00:07:58.927 } 00:07:58.927 ]' 00:07:58.927 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:58.927 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:58.927 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:58.927 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.927 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.927 [2024-11-25 20:26:06.927480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:58.927 [2024-11-25 20:26:06.927553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.927 [2024-11-25 20:26:06.927611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.927 [2024-11-25 20:26:06.927629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.927 [2024-11-25 20:26:06.930148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.927 [2024-11-25 20:26:06.930200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:58.927 Passthru0 00:07:58.927 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.927 
20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:58.927 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.927 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.927 20:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.927 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:58.927 { 00:07:58.927 "name": "Malloc0", 00:07:58.927 "aliases": [ 00:07:58.927 "0c6d3cc8-2245-403c-b28a-7d4f78fe3700" 00:07:58.927 ], 00:07:58.927 "product_name": "Malloc disk", 00:07:58.927 "block_size": 512, 00:07:58.927 "num_blocks": 16384, 00:07:58.927 "uuid": "0c6d3cc8-2245-403c-b28a-7d4f78fe3700", 00:07:58.927 "assigned_rate_limits": { 00:07:58.927 "rw_ios_per_sec": 0, 00:07:58.927 "rw_mbytes_per_sec": 0, 00:07:58.927 "r_mbytes_per_sec": 0, 00:07:58.927 "w_mbytes_per_sec": 0 00:07:58.927 }, 00:07:58.927 "claimed": true, 00:07:58.927 "claim_type": "exclusive_write", 00:07:58.927 "zoned": false, 00:07:58.927 "supported_io_types": { 00:07:58.927 "read": true, 00:07:58.927 "write": true, 00:07:58.927 "unmap": true, 00:07:58.927 "flush": true, 00:07:58.927 "reset": true, 00:07:58.927 "nvme_admin": false, 00:07:58.927 "nvme_io": false, 00:07:58.927 "nvme_io_md": false, 00:07:58.927 "write_zeroes": true, 00:07:58.927 "zcopy": true, 00:07:58.927 "get_zone_info": false, 00:07:58.927 "zone_management": false, 00:07:58.927 "zone_append": false, 00:07:58.927 "compare": false, 00:07:58.927 "compare_and_write": false, 00:07:58.927 "abort": true, 00:07:58.927 "seek_hole": false, 00:07:58.927 "seek_data": false, 00:07:58.927 "copy": true, 00:07:58.927 "nvme_iov_md": false 00:07:58.927 }, 00:07:58.927 "memory_domains": [ 00:07:58.927 { 00:07:58.927 "dma_device_id": "system", 00:07:58.927 "dma_device_type": 1 00:07:58.927 }, 00:07:58.927 { 00:07:58.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.927 "dma_device_type": 2 00:07:58.927 } 00:07:58.927 ], 00:07:58.927 "driver_specific": {} 00:07:58.927 }, 00:07:58.927 { 00:07:58.927 "name": "Passthru0", 00:07:58.927 "aliases": [ 00:07:58.927 "37ef8a4c-89fe-5f3b-8149-b4fa9110dc29" 00:07:58.927 ], 00:07:58.927 "product_name": "passthru", 00:07:58.927 "block_size": 512, 00:07:58.927 "num_blocks": 16384, 00:07:58.927 "uuid": "37ef8a4c-89fe-5f3b-8149-b4fa9110dc29", 00:07:58.927 "assigned_rate_limits": { 00:07:58.927 "rw_ios_per_sec": 0, 00:07:58.927 "rw_mbytes_per_sec": 0, 00:07:58.927 "r_mbytes_per_sec": 0, 00:07:58.927 "w_mbytes_per_sec": 0 00:07:58.927 }, 00:07:58.927 "claimed": false, 00:07:58.927 "zoned": false, 00:07:58.927 "supported_io_types": { 00:07:58.927 "read": true, 00:07:58.927 "write": true, 00:07:58.927 "unmap": true, 00:07:58.927 "flush": true, 00:07:58.927 "reset": true, 00:07:58.927 "nvme_admin": false, 00:07:58.927 "nvme_io": false, 00:07:58.927 "nvme_io_md": false, 00:07:58.927 "write_zeroes": true, 00:07:58.927 "zcopy": true, 00:07:58.927 "get_zone_info": false, 00:07:58.927 "zone_management": false, 00:07:58.927 "zone_append": false, 00:07:58.927 "compare": false, 00:07:58.927 "compare_and_write": false, 00:07:58.927 "abort": true, 00:07:58.927 "seek_hole": false, 00:07:58.927 "seek_data": false, 00:07:58.927 "copy": true, 00:07:58.927 "nvme_iov_md": false 00:07:58.927 }, 00:07:58.927 "memory_domains": [ 00:07:58.927 { 00:07:58.927 "dma_device_id": "system", 00:07:58.927 "dma_device_type": 1 00:07:58.927 }, 00:07:58.927 { 00:07:58.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.927 "dma_device_type": 2 
00:07:58.927 } 00:07:58.927 ], 00:07:58.927 "driver_specific": { 00:07:58.927 "passthru": { 00:07:58.927 "name": "Passthru0", 00:07:58.927 "base_bdev_name": "Malloc0" 00:07:58.927 } 00:07:58.927 } 00:07:58.927 } 00:07:58.927 ]' 00:07:58.927 20:26:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:58.927 20:26:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:58.927 20:26:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:58.927 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.927 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.927 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.927 20:26:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:58.927 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.927 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.927 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.927 20:26:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:58.927 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.927 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.187 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.187 20:26:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:59.187 20:26:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:59.187 20:26:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:59.187 00:07:59.187 real 0m0.354s 00:07:59.187 user 0m0.177s 00:07:59.187 sys 0m0.069s 00:07:59.187 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.187 20:26:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.187 ************************************ 00:07:59.187 END TEST rpc_integrity 00:07:59.187 ************************************ 00:07:59.187 20:26:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:59.187 20:26:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.187 20:26:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.187 20:26:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.187 ************************************ 00:07:59.187 START TEST rpc_plugins 00:07:59.187 ************************************ 00:07:59.187 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:59.187 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:59.187 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.187 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:59.187 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.187 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:59.187 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:59.187 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.187 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:59.187 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.187 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:59.187 { 00:07:59.187 "name": "Malloc1", 00:07:59.187 "aliases": 
[ 00:07:59.187 "78613578-a373-4483-9502-3ac1df91955e" 00:07:59.187 ], 00:07:59.187 "product_name": "Malloc disk", 00:07:59.187 "block_size": 4096, 00:07:59.187 "num_blocks": 256, 00:07:59.187 "uuid": "78613578-a373-4483-9502-3ac1df91955e", 00:07:59.187 "assigned_rate_limits": { 00:07:59.187 "rw_ios_per_sec": 0, 00:07:59.187 "rw_mbytes_per_sec": 0, 00:07:59.187 "r_mbytes_per_sec": 0, 00:07:59.187 "w_mbytes_per_sec": 0 00:07:59.187 }, 00:07:59.187 "claimed": false, 00:07:59.187 "zoned": false, 00:07:59.187 "supported_io_types": { 00:07:59.187 "read": true, 00:07:59.187 "write": true, 00:07:59.187 "unmap": true, 00:07:59.187 "flush": true, 00:07:59.187 "reset": true, 00:07:59.187 "nvme_admin": false, 00:07:59.187 "nvme_io": false, 00:07:59.187 "nvme_io_md": false, 00:07:59.187 "write_zeroes": true, 00:07:59.187 "zcopy": true, 00:07:59.187 "get_zone_info": false, 00:07:59.187 "zone_management": false, 00:07:59.187 "zone_append": false, 00:07:59.187 "compare": false, 00:07:59.187 "compare_and_write": false, 00:07:59.187 "abort": true, 00:07:59.187 "seek_hole": false, 00:07:59.187 "seek_data": false, 00:07:59.187 "copy": true, 00:07:59.187 "nvme_iov_md": false 00:07:59.187 }, 00:07:59.187 "memory_domains": [ 00:07:59.187 { 00:07:59.187 "dma_device_id": "system", 00:07:59.187 "dma_device_type": 1 00:07:59.187 }, 00:07:59.187 { 00:07:59.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.187 "dma_device_type": 2 00:07:59.187 } 00:07:59.187 ], 00:07:59.187 "driver_specific": {} 00:07:59.187 } 00:07:59.187 ]' 00:07:59.187 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:59.188 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:59.188 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:59.188 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.188 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:59.188 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.188 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:59.188 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.188 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:59.188 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.188 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:59.188 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:59.447 ************************************ 00:07:59.447 END TEST rpc_plugins 00:07:59.447 ************************************ 00:07:59.447 20:26:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:59.447 00:07:59.447 real 0m0.179s 00:07:59.447 user 0m0.104s 00:07:59.447 sys 0m0.029s 00:07:59.447 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.447 20:26:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:59.447 20:26:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:59.447 20:26:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.447 20:26:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.447 20:26:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.447 ************************************ 00:07:59.447 START TEST rpc_trace_cmd_test 00:07:59.447 ************************************ 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:59.447 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57961", 00:07:59.447 "tpoint_group_mask": "0x8", 00:07:59.447 "iscsi_conn": { 00:07:59.447 "mask": "0x2", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "scsi": { 00:07:59.447 "mask": "0x4", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "bdev": { 00:07:59.447 "mask": "0x8", 00:07:59.447 "tpoint_mask": "0xffffffffffffffff" 00:07:59.447 }, 00:07:59.447 "nvmf_rdma": { 00:07:59.447 "mask": "0x10", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "nvmf_tcp": { 00:07:59.447 "mask": "0x20", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "ftl": { 00:07:59.447 "mask": "0x40", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "blobfs": { 00:07:59.447 "mask": "0x80", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "dsa": { 00:07:59.447 "mask": "0x200", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "thread": { 00:07:59.447 "mask": "0x400", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "nvme_pcie": { 00:07:59.447 "mask": "0x800", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "iaa": { 00:07:59.447 "mask": "0x1000", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "nvme_tcp": { 00:07:59.447 "mask": "0x2000", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "bdev_nvme": { 00:07:59.447 "mask": "0x4000", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "sock": { 00:07:59.447 "mask": "0x8000", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "blob": { 00:07:59.447 "mask": "0x10000", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "bdev_raid": { 00:07:59.447 "mask": "0x20000", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 }, 00:07:59.447 "scheduler": { 00:07:59.447 "mask": "0x40000", 00:07:59.447 "tpoint_mask": "0x0" 00:07:59.447 } 00:07:59.447 }' 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:59.447 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:59.705 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:59.705 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:59.705 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:59.705 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:59.705 ************************************ 00:07:59.705 END TEST rpc_trace_cmd_test 00:07:59.705 ************************************ 00:07:59.705 20:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:59.705 00:07:59.705 real 0m0.241s 
00:07:59.705 user 0m0.184s 00:07:59.705 sys 0m0.047s 00:07:59.705 20:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.705 20:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.705 20:26:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:59.705 20:26:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:59.705 20:26:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:59.705 20:26:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.705 20:26:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.705 20:26:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.705 ************************************ 00:07:59.705 START TEST rpc_daemon_integrity 00:07:59.705 ************************************ 00:07:59.705 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:59.705 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:59.705 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.705 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.705 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.705 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.706 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:59.965 { 00:07:59.965 "name": "Malloc2", 00:07:59.965 "aliases": [ 00:07:59.965 "94f43b3d-c1f8-4a30-a5ed-137a65fa75b9" 00:07:59.965 ], 00:07:59.965 "product_name": "Malloc disk", 00:07:59.965 "block_size": 512, 00:07:59.965 "num_blocks": 16384, 00:07:59.965 "uuid": "94f43b3d-c1f8-4a30-a5ed-137a65fa75b9", 00:07:59.965 "assigned_rate_limits": { 00:07:59.965 "rw_ios_per_sec": 0, 00:07:59.965 "rw_mbytes_per_sec": 0, 00:07:59.965 "r_mbytes_per_sec": 0, 00:07:59.965 "w_mbytes_per_sec": 0 00:07:59.965 }, 00:07:59.965 "claimed": false, 00:07:59.965 "zoned": false, 00:07:59.965 "supported_io_types": { 00:07:59.965 "read": true, 00:07:59.965 "write": true, 00:07:59.965 "unmap": true, 00:07:59.965 "flush": true, 00:07:59.965 "reset": true, 00:07:59.965 "nvme_admin": false, 00:07:59.965 "nvme_io": false, 00:07:59.965 "nvme_io_md": false, 00:07:59.965 "write_zeroes": true, 00:07:59.965 "zcopy": true, 00:07:59.965 "get_zone_info": false, 00:07:59.965 "zone_management": false, 00:07:59.965 "zone_append": false, 00:07:59.965 "compare": false, 00:07:59.965 
"compare_and_write": false, 00:07:59.965 "abort": true, 00:07:59.965 "seek_hole": false, 00:07:59.965 "seek_data": false, 00:07:59.965 "copy": true, 00:07:59.965 "nvme_iov_md": false 00:07:59.965 }, 00:07:59.965 "memory_domains": [ 00:07:59.965 { 00:07:59.965 "dma_device_id": "system", 00:07:59.965 "dma_device_type": 1 00:07:59.965 }, 00:07:59.965 { 00:07:59.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.965 "dma_device_type": 2 00:07:59.965 } 00:07:59.965 ], 00:07:59.965 "driver_specific": {} 00:07:59.965 } 00:07:59.965 ]' 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.965 [2024-11-25 20:26:07.894343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:59.965 [2024-11-25 20:26:07.894574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.965 [2024-11-25 20:26:07.894611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:59.965 [2024-11-25 20:26:07.894626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.965 [2024-11-25 20:26:07.897362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.965 [2024-11-25 20:26:07.897409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:59.965 Passthru0 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.965 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:59.965 { 00:07:59.965 "name": "Malloc2", 00:07:59.965 "aliases": [ 00:07:59.965 "94f43b3d-c1f8-4a30-a5ed-137a65fa75b9" 00:07:59.965 ], 00:07:59.965 "product_name": "Malloc disk", 00:07:59.965 "block_size": 512, 00:07:59.965 "num_blocks": 16384, 00:07:59.965 "uuid": "94f43b3d-c1f8-4a30-a5ed-137a65fa75b9", 00:07:59.965 "assigned_rate_limits": { 00:07:59.965 "rw_ios_per_sec": 0, 00:07:59.965 "rw_mbytes_per_sec": 0, 00:07:59.965 "r_mbytes_per_sec": 0, 00:07:59.965 "w_mbytes_per_sec": 0 00:07:59.965 }, 00:07:59.965 "claimed": true, 00:07:59.965 "claim_type": "exclusive_write", 00:07:59.965 "zoned": false, 00:07:59.965 "supported_io_types": { 00:07:59.965 "read": true, 00:07:59.965 "write": true, 00:07:59.965 "unmap": true, 00:07:59.965 "flush": true, 00:07:59.965 "reset": true, 00:07:59.965 "nvme_admin": false, 00:07:59.965 "nvme_io": false, 00:07:59.965 "nvme_io_md": false, 00:07:59.965 "write_zeroes": true, 00:07:59.965 "zcopy": true, 00:07:59.965 "get_zone_info": false, 00:07:59.965 "zone_management": false, 00:07:59.965 "zone_append": false, 00:07:59.965 "compare": false, 00:07:59.965 "compare_and_write": false, 00:07:59.965 "abort": true, 00:07:59.965 "seek_hole": false, 00:07:59.965 "seek_data": false, 
00:07:59.965 "copy": true, 00:07:59.965 "nvme_iov_md": false 00:07:59.965 }, 00:07:59.965 "memory_domains": [ 00:07:59.965 { 00:07:59.965 "dma_device_id": "system", 00:07:59.965 "dma_device_type": 1 00:07:59.965 }, 00:07:59.965 { 00:07:59.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.965 "dma_device_type": 2 00:07:59.965 } 00:07:59.965 ], 00:07:59.965 "driver_specific": {} 00:07:59.965 }, 00:07:59.965 { 00:07:59.965 "name": "Passthru0", 00:07:59.965 "aliases": [ 00:07:59.965 "603df762-4f1f-5c22-a063-b09d621f15a0" 00:07:59.965 ], 00:07:59.965 "product_name": "passthru", 00:07:59.965 "block_size": 512, 00:07:59.965 "num_blocks": 16384, 00:07:59.965 "uuid": "603df762-4f1f-5c22-a063-b09d621f15a0", 00:07:59.965 "assigned_rate_limits": { 00:07:59.965 "rw_ios_per_sec": 0, 00:07:59.965 "rw_mbytes_per_sec": 0, 00:07:59.965 "r_mbytes_per_sec": 0, 00:07:59.965 "w_mbytes_per_sec": 0 00:07:59.965 }, 00:07:59.965 "claimed": false, 00:07:59.965 "zoned": false, 00:07:59.965 "supported_io_types": { 00:07:59.965 "read": true, 00:07:59.965 "write": true, 00:07:59.965 "unmap": true, 00:07:59.965 "flush": true, 00:07:59.965 "reset": true, 00:07:59.965 "nvme_admin": false, 00:07:59.965 "nvme_io": false, 00:07:59.965 "nvme_io_md": false, 00:07:59.965 "write_zeroes": true, 00:07:59.965 "zcopy": true, 00:07:59.965 "get_zone_info": false, 00:07:59.965 "zone_management": false, 00:07:59.965 "zone_append": false, 00:07:59.965 "compare": false, 00:07:59.965 "compare_and_write": false, 00:07:59.965 "abort": true, 00:07:59.965 "seek_hole": false, 00:07:59.965 "seek_data": false, 00:07:59.965 "copy": true, 00:07:59.965 "nvme_iov_md": false 00:07:59.965 }, 00:07:59.965 "memory_domains": [ 00:07:59.965 { 00:07:59.965 "dma_device_id": "system", 00:07:59.965 "dma_device_type": 1 00:07:59.965 }, 00:07:59.965 { 00:07:59.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.965 "dma_device_type": 2 00:07:59.965 } 00:07:59.965 ], 00:07:59.965 "driver_specific": { 00:07:59.966 "passthru": { 00:07:59.966 "name": "Passthru0", 00:07:59.966 "base_bdev_name": "Malloc2" 00:07:59.966 } 00:07:59.966 } 00:07:59.966 } 00:07:59.966 ]' 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.966 20:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:59.966 ************************************ 00:07:59.966 END TEST rpc_daemon_integrity 00:07:59.966 ************************************ 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:59.966 00:07:59.966 real 0m0.358s 00:07:59.966 user 0m0.201s 00:07:59.966 sys 0m0.063s 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.966 20:26:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:00.225 20:26:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:00.225 20:26:08 rpc -- rpc/rpc.sh@84 -- # killprocess 57961 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@954 -- # '[' -z 57961 ']' 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@958 -- # kill -0 57961 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@959 -- # uname 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57961 00:08:00.225 killing process with pid 57961 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57961' 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@973 -- # kill 57961 00:08:00.225 20:26:08 rpc -- common/autotest_common.sh@978 -- # wait 57961 00:08:02.764 00:08:02.764 real 0m5.416s 00:08:02.764 user 0m5.916s 00:08:02.764 sys 0m1.019s 00:08:02.764 ************************************ 00:08:02.764 END TEST rpc 00:08:02.764 ************************************ 00:08:02.764 20:26:10 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.764 20:26:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.764 20:26:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:02.764 20:26:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.764 20:26:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.764 20:26:10 -- common/autotest_common.sh@10 -- # set +x 00:08:02.764 ************************************ 00:08:02.764 START TEST skip_rpc 00:08:02.764 ************************************ 00:08:02.764 20:26:10 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:02.764 * Looking for test storage... 
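For reference, the rpc suite above boils down to this round-trip against the running target (bdev names are assigned by the target; Malloc0 on a fresh instance, with 8 MB total size and 512-byte blocks):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                      # returns the new bdev's name, e.g. Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0
    $rpc bdev_get_bdevs | jq length                    # 2 while both bdevs exist
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                    # back to 0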
00:08:02.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:02.764 20:26:10 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.764 20:26:10 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.764 20:26:10 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:03.022 20:26:10 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.023 20:26:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:03.023 20:26:10 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.023 20:26:10 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:03.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.023 --rc genhtml_branch_coverage=1 00:08:03.023 --rc genhtml_function_coverage=1 00:08:03.023 --rc genhtml_legend=1 00:08:03.023 --rc geninfo_all_blocks=1 00:08:03.023 --rc geninfo_unexecuted_blocks=1 00:08:03.023 00:08:03.023 ' 00:08:03.023 20:26:10 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:03.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.023 --rc genhtml_branch_coverage=1 00:08:03.023 --rc genhtml_function_coverage=1 00:08:03.023 --rc genhtml_legend=1 00:08:03.023 --rc geninfo_all_blocks=1 00:08:03.023 --rc geninfo_unexecuted_blocks=1 00:08:03.023 00:08:03.023 ' 00:08:03.023 20:26:10 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:03.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.023 --rc genhtml_branch_coverage=1 00:08:03.023 --rc genhtml_function_coverage=1 00:08:03.023 --rc genhtml_legend=1 00:08:03.023 --rc geninfo_all_blocks=1 00:08:03.023 --rc geninfo_unexecuted_blocks=1 00:08:03.023 00:08:03.023 ' 00:08:03.023 20:26:10 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:03.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.023 --rc genhtml_branch_coverage=1 00:08:03.023 --rc genhtml_function_coverage=1 00:08:03.023 --rc genhtml_legend=1 00:08:03.023 --rc geninfo_all_blocks=1 00:08:03.023 --rc geninfo_unexecuted_blocks=1 00:08:03.023 00:08:03.023 ' 00:08:03.023 20:26:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:03.023 20:26:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:03.023 20:26:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:03.023 20:26:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.023 20:26:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.023 20:26:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.023 ************************************ 00:08:03.023 START TEST skip_rpc 00:08:03.023 ************************************ 00:08:03.023 20:26:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:03.023 20:26:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:03.023 20:26:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58190 00:08:03.023 20:26:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:03.023 20:26:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:03.023 [2024-11-25 20:26:11.076519] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:08:03.023 [2024-11-25 20:26:11.076989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:08:03.281 [2024-11-25 20:26:11.270537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.281 [2024-11-25 20:26:11.392981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58190 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58190 ']' 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58190 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.573 20:26:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58190 00:08:08.573 killing process with pid 58190 00:08:08.573 20:26:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.573 20:26:16 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.573 20:26:16 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58190' 00:08:08.573 20:26:16 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58190 00:08:08.573 20:26:16 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58190 00:08:10.556 00:08:10.556 real 0m7.462s 00:08:10.556 user 0m6.972s 00:08:10.556 sys 0m0.409s 00:08:10.556 ************************************ 00:08:10.556 END TEST skip_rpc 00:08:10.556 ************************************ 00:08:10.556 20:26:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.556 20:26:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:08:10.556 20:26:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:10.556 20:26:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.556 20:26:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.556 20:26:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.556 ************************************ 00:08:10.556 START TEST skip_rpc_with_json 00:08:10.556 ************************************ 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58294 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58294 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58294 ']' 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.556 20:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.556 [2024-11-25 20:26:18.593389] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:08:10.556 [2024-11-25 20:26:18.594149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58294 ] 00:08:10.814 [2024-11-25 20:26:18.775657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.814 [2024-11-25 20:26:18.892034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.751 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:11.752 [2024-11-25 20:26:19.845860] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:11.752 request: 00:08:11.752 { 00:08:11.752 "trtype": "tcp", 00:08:11.752 "method": "nvmf_get_transports", 00:08:11.752 "req_id": 1 00:08:11.752 } 00:08:11.752 Got JSON-RPC error response 00:08:11.752 response: 00:08:11.752 { 00:08:11.752 "code": -19, 00:08:11.752 "message": "No such device" 00:08:11.752 } 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:11.752 [2024-11-25 20:26:19.861945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.752 20:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:12.012 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.012 20:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:12.012 { 00:08:12.012 "subsystems": [ 00:08:12.012 { 00:08:12.012 "subsystem": "fsdev", 00:08:12.012 "config": [ 00:08:12.012 { 00:08:12.012 "method": "fsdev_set_opts", 00:08:12.012 "params": { 00:08:12.012 "fsdev_io_pool_size": 65535, 00:08:12.012 "fsdev_io_cache_size": 256 00:08:12.012 } 00:08:12.012 } 00:08:12.012 ] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "keyring", 00:08:12.012 "config": [] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "iobuf", 00:08:12.012 "config": [ 00:08:12.012 { 00:08:12.012 "method": "iobuf_set_options", 00:08:12.012 "params": { 00:08:12.012 "small_pool_count": 8192, 00:08:12.012 "large_pool_count": 1024, 00:08:12.012 "small_bufsize": 8192, 00:08:12.012 "large_bufsize": 135168, 00:08:12.012 "enable_numa": false 00:08:12.012 } 00:08:12.012 } 00:08:12.012 ] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "sock", 00:08:12.012 "config": [ 00:08:12.012 { 
00:08:12.012 "method": "sock_set_default_impl", 00:08:12.012 "params": { 00:08:12.012 "impl_name": "posix" 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "sock_impl_set_options", 00:08:12.012 "params": { 00:08:12.012 "impl_name": "ssl", 00:08:12.012 "recv_buf_size": 4096, 00:08:12.012 "send_buf_size": 4096, 00:08:12.012 "enable_recv_pipe": true, 00:08:12.012 "enable_quickack": false, 00:08:12.012 "enable_placement_id": 0, 00:08:12.012 "enable_zerocopy_send_server": true, 00:08:12.012 "enable_zerocopy_send_client": false, 00:08:12.012 "zerocopy_threshold": 0, 00:08:12.012 "tls_version": 0, 00:08:12.012 "enable_ktls": false 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "sock_impl_set_options", 00:08:12.012 "params": { 00:08:12.012 "impl_name": "posix", 00:08:12.012 "recv_buf_size": 2097152, 00:08:12.012 "send_buf_size": 2097152, 00:08:12.012 "enable_recv_pipe": true, 00:08:12.012 "enable_quickack": false, 00:08:12.012 "enable_placement_id": 0, 00:08:12.012 "enable_zerocopy_send_server": true, 00:08:12.012 "enable_zerocopy_send_client": false, 00:08:12.012 "zerocopy_threshold": 0, 00:08:12.012 "tls_version": 0, 00:08:12.012 "enable_ktls": false 00:08:12.012 } 00:08:12.012 } 00:08:12.012 ] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "vmd", 00:08:12.012 "config": [] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "accel", 00:08:12.012 "config": [ 00:08:12.012 { 00:08:12.012 "method": "accel_set_options", 00:08:12.012 "params": { 00:08:12.012 "small_cache_size": 128, 00:08:12.012 "large_cache_size": 16, 00:08:12.012 "task_count": 2048, 00:08:12.012 "sequence_count": 2048, 00:08:12.012 "buf_count": 2048 00:08:12.012 } 00:08:12.012 } 00:08:12.012 ] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "bdev", 00:08:12.012 "config": [ 00:08:12.012 { 00:08:12.012 "method": "bdev_set_options", 00:08:12.012 "params": { 00:08:12.012 "bdev_io_pool_size": 65535, 00:08:12.012 "bdev_io_cache_size": 256, 00:08:12.012 "bdev_auto_examine": true, 00:08:12.012 "iobuf_small_cache_size": 128, 00:08:12.012 "iobuf_large_cache_size": 16 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "bdev_raid_set_options", 00:08:12.012 "params": { 00:08:12.012 "process_window_size_kb": 1024, 00:08:12.012 "process_max_bandwidth_mb_sec": 0 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "bdev_iscsi_set_options", 00:08:12.012 "params": { 00:08:12.012 "timeout_sec": 30 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "bdev_nvme_set_options", 00:08:12.012 "params": { 00:08:12.012 "action_on_timeout": "none", 00:08:12.012 "timeout_us": 0, 00:08:12.012 "timeout_admin_us": 0, 00:08:12.012 "keep_alive_timeout_ms": 10000, 00:08:12.012 "arbitration_burst": 0, 00:08:12.012 "low_priority_weight": 0, 00:08:12.012 "medium_priority_weight": 0, 00:08:12.012 "high_priority_weight": 0, 00:08:12.012 "nvme_adminq_poll_period_us": 10000, 00:08:12.012 "nvme_ioq_poll_period_us": 0, 00:08:12.012 "io_queue_requests": 0, 00:08:12.012 "delay_cmd_submit": true, 00:08:12.012 "transport_retry_count": 4, 00:08:12.012 "bdev_retry_count": 3, 00:08:12.012 "transport_ack_timeout": 0, 00:08:12.012 "ctrlr_loss_timeout_sec": 0, 00:08:12.012 "reconnect_delay_sec": 0, 00:08:12.012 "fast_io_fail_timeout_sec": 0, 00:08:12.012 "disable_auto_failback": false, 00:08:12.012 "generate_uuids": false, 00:08:12.012 "transport_tos": 0, 00:08:12.012 "nvme_error_stat": false, 00:08:12.012 "rdma_srq_size": 0, 00:08:12.012 "io_path_stat": false, 
00:08:12.012 "allow_accel_sequence": false, 00:08:12.012 "rdma_max_cq_size": 0, 00:08:12.012 "rdma_cm_event_timeout_ms": 0, 00:08:12.012 "dhchap_digests": [ 00:08:12.012 "sha256", 00:08:12.012 "sha384", 00:08:12.012 "sha512" 00:08:12.012 ], 00:08:12.012 "dhchap_dhgroups": [ 00:08:12.012 "null", 00:08:12.012 "ffdhe2048", 00:08:12.012 "ffdhe3072", 00:08:12.012 "ffdhe4096", 00:08:12.012 "ffdhe6144", 00:08:12.012 "ffdhe8192" 00:08:12.012 ] 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "bdev_nvme_set_hotplug", 00:08:12.012 "params": { 00:08:12.012 "period_us": 100000, 00:08:12.012 "enable": false 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "bdev_wait_for_examine" 00:08:12.012 } 00:08:12.012 ] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "scsi", 00:08:12.012 "config": null 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "scheduler", 00:08:12.012 "config": [ 00:08:12.012 { 00:08:12.012 "method": "framework_set_scheduler", 00:08:12.012 "params": { 00:08:12.012 "name": "static" 00:08:12.012 } 00:08:12.012 } 00:08:12.012 ] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "vhost_scsi", 00:08:12.012 "config": [] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "vhost_blk", 00:08:12.012 "config": [] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "ublk", 00:08:12.012 "config": [] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "nbd", 00:08:12.012 "config": [] 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "subsystem": "nvmf", 00:08:12.012 "config": [ 00:08:12.012 { 00:08:12.012 "method": "nvmf_set_config", 00:08:12.012 "params": { 00:08:12.012 "discovery_filter": "match_any", 00:08:12.012 "admin_cmd_passthru": { 00:08:12.012 "identify_ctrlr": false 00:08:12.012 }, 00:08:12.012 "dhchap_digests": [ 00:08:12.012 "sha256", 00:08:12.012 "sha384", 00:08:12.012 "sha512" 00:08:12.012 ], 00:08:12.012 "dhchap_dhgroups": [ 00:08:12.012 "null", 00:08:12.012 "ffdhe2048", 00:08:12.012 "ffdhe3072", 00:08:12.012 "ffdhe4096", 00:08:12.012 "ffdhe6144", 00:08:12.012 "ffdhe8192" 00:08:12.012 ] 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "nvmf_set_max_subsystems", 00:08:12.012 "params": { 00:08:12.012 "max_subsystems": 1024 00:08:12.012 } 00:08:12.012 }, 00:08:12.012 { 00:08:12.012 "method": "nvmf_set_crdt", 00:08:12.012 "params": { 00:08:12.012 "crdt1": 0, 00:08:12.013 "crdt2": 0, 00:08:12.013 "crdt3": 0 00:08:12.013 } 00:08:12.013 }, 00:08:12.013 { 00:08:12.013 "method": "nvmf_create_transport", 00:08:12.013 "params": { 00:08:12.013 "trtype": "TCP", 00:08:12.013 "max_queue_depth": 128, 00:08:12.013 "max_io_qpairs_per_ctrlr": 127, 00:08:12.013 "in_capsule_data_size": 4096, 00:08:12.013 "max_io_size": 131072, 00:08:12.013 "io_unit_size": 131072, 00:08:12.013 "max_aq_depth": 128, 00:08:12.013 "num_shared_buffers": 511, 00:08:12.013 "buf_cache_size": 4294967295, 00:08:12.013 "dif_insert_or_strip": false, 00:08:12.013 "zcopy": false, 00:08:12.013 "c2h_success": true, 00:08:12.013 "sock_priority": 0, 00:08:12.013 "abort_timeout_sec": 1, 00:08:12.013 "ack_timeout": 0, 00:08:12.013 "data_wr_pool_size": 0 00:08:12.013 } 00:08:12.013 } 00:08:12.013 ] 00:08:12.013 }, 00:08:12.013 { 00:08:12.013 "subsystem": "iscsi", 00:08:12.013 "config": [ 00:08:12.013 { 00:08:12.013 "method": "iscsi_set_options", 00:08:12.013 "params": { 00:08:12.013 "node_base": "iqn.2016-06.io.spdk", 00:08:12.013 "max_sessions": 128, 00:08:12.013 "max_connections_per_session": 2, 00:08:12.013 "max_queue_depth": 64, 00:08:12.013 
"default_time2wait": 2, 00:08:12.013 "default_time2retain": 20, 00:08:12.013 "first_burst_length": 8192, 00:08:12.013 "immediate_data": true, 00:08:12.013 "allow_duplicated_isid": false, 00:08:12.013 "error_recovery_level": 0, 00:08:12.013 "nop_timeout": 60, 00:08:12.013 "nop_in_interval": 30, 00:08:12.013 "disable_chap": false, 00:08:12.013 "require_chap": false, 00:08:12.013 "mutual_chap": false, 00:08:12.013 "chap_group": 0, 00:08:12.013 "max_large_datain_per_connection": 64, 00:08:12.013 "max_r2t_per_connection": 4, 00:08:12.013 "pdu_pool_size": 36864, 00:08:12.013 "immediate_data_pool_size": 16384, 00:08:12.013 "data_out_pool_size": 2048 00:08:12.013 } 00:08:12.013 } 00:08:12.013 ] 00:08:12.013 } 00:08:12.013 ] 00:08:12.013 } 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58294 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58294 ']' 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58294 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58294 00:08:12.013 killing process with pid 58294 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58294' 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58294 00:08:12.013 20:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58294 00:08:14.582 20:26:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58350 00:08:14.582 20:26:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:14.582 20:26:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58350 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58350 ']' 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58350 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58350 00:08:19.864 killing process with pid 58350 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58350' 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58350 00:08:19.864 20:26:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58350 00:08:22.396 20:26:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:22.397 20:26:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:22.397 ************************************ 00:08:22.397 END TEST skip_rpc_with_json 00:08:22.397 ************************************ 00:08:22.397 00:08:22.397 real 0m11.515s 00:08:22.397 user 0m10.965s 00:08:22.397 sys 0m0.925s 00:08:22.397 20:26:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.397 20:26:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:22.397 20:26:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:22.397 20:26:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.397 20:26:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.397 20:26:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.397 ************************************ 00:08:22.397 START TEST skip_rpc_with_delay 00:08:22.397 ************************************ 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:22.397 [2024-11-25 20:26:30.180010] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:22.397 ************************************ 00:08:22.397 END TEST skip_rpc_with_delay 00:08:22.397 ************************************ 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.397 00:08:22.397 real 0m0.185s 00:08:22.397 user 0m0.093s 00:08:22.397 sys 0m0.090s 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.397 20:26:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:22.397 20:26:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:22.397 20:26:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:22.397 20:26:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:22.397 20:26:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.397 20:26:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.397 20:26:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.397 ************************************ 00:08:22.397 START TEST exit_on_failed_rpc_init 00:08:22.397 ************************************ 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:22.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58489 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58489 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58489 ']' 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.397 20:26:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:22.397 [2024-11-25 20:26:30.454069] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:08:22.397 [2024-11-25 20:26:30.454396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58489 ] 00:08:22.656 [2024-11-25 20:26:30.641382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.656 [2024-11-25 20:26:30.761686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:23.592 20:26:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:23.851 [2024-11-25 20:26:31.773307] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:08:23.851 [2024-11-25 20:26:31.773904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58507 ] 00:08:23.851 [2024-11-25 20:26:31.954378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.111 [2024-11-25 20:26:32.082712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.111 [2024-11-25 20:26:32.082830] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:24.111 [2024-11-25 20:26:32.082856] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:24.111 [2024-11-25 20:26:32.082882] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58489 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58489 ']' 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58489 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58489 00:08:24.369 killing process with pid 58489 00:08:24.369 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.370 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.370 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58489' 00:08:24.370 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58489 00:08:24.370 20:26:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58489 00:08:26.904 00:08:26.904 real 0m4.573s 00:08:26.904 user 0m4.866s 00:08:26.904 sys 0m0.653s 00:08:26.904 20:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.904 20:26:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:26.904 ************************************ 00:08:26.904 END TEST exit_on_failed_rpc_init 00:08:26.904 ************************************ 00:08:26.904 20:26:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:26.904 00:08:26.904 real 0m24.273s 00:08:26.904 user 0m23.130s 00:08:26.904 sys 0m2.380s 00:08:26.904 ************************************ 00:08:26.904 END TEST skip_rpc 00:08:26.904 ************************************ 00:08:26.904 20:26:34 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.904 20:26:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.904 20:26:35 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:26.904 20:26:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.904 20:26:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.904 20:26:35 -- common/autotest_common.sh@10 -- # set +x 00:08:27.163 
************************************ 00:08:27.163 START TEST rpc_client 00:08:27.163 ************************************ 00:08:27.163 20:26:35 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:27.163 * Looking for test storage... 00:08:27.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:27.163 20:26:35 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.163 20:26:35 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.163 20:26:35 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.163 20:26:35 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.163 20:26:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:27.163 20:26:35 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.163 20:26:35 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:27.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.163 --rc genhtml_branch_coverage=1 00:08:27.163 --rc genhtml_function_coverage=1 00:08:27.163 --rc genhtml_legend=1 00:08:27.163 --rc geninfo_all_blocks=1 00:08:27.163 --rc geninfo_unexecuted_blocks=1 00:08:27.163 00:08:27.163 ' 00:08:27.163 20:26:35 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:27.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.163 --rc genhtml_branch_coverage=1 00:08:27.163 --rc genhtml_function_coverage=1 00:08:27.163 --rc genhtml_legend=1 00:08:27.163 --rc geninfo_all_blocks=1 00:08:27.163 --rc geninfo_unexecuted_blocks=1 00:08:27.163 00:08:27.163 ' 00:08:27.164 20:26:35 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:27.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.164 --rc genhtml_branch_coverage=1 00:08:27.164 --rc genhtml_function_coverage=1 00:08:27.164 --rc genhtml_legend=1 00:08:27.164 --rc geninfo_all_blocks=1 00:08:27.164 --rc geninfo_unexecuted_blocks=1 00:08:27.164 00:08:27.164 ' 00:08:27.164 20:26:35 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:27.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.164 --rc genhtml_branch_coverage=1 00:08:27.164 --rc genhtml_function_coverage=1 00:08:27.164 --rc genhtml_legend=1 00:08:27.164 --rc geninfo_all_blocks=1 00:08:27.164 --rc geninfo_unexecuted_blocks=1 00:08:27.164 00:08:27.164 ' 00:08:27.164 20:26:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:27.422 OK 00:08:27.422 20:26:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:27.422 ************************************ 00:08:27.422 END TEST rpc_client 00:08:27.422 ************************************ 00:08:27.422 00:08:27.422 real 0m0.305s 00:08:27.422 user 0m0.164s 00:08:27.422 sys 0m0.152s 00:08:27.422 20:26:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.422 20:26:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:27.422 20:26:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:27.422 20:26:35 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.422 20:26:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.422 20:26:35 -- common/autotest_common.sh@10 -- # set +x 00:08:27.422 ************************************ 00:08:27.422 START TEST json_config 00:08:27.422 ************************************ 00:08:27.422 20:26:35 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:27.422 20:26:35 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.422 20:26:35 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.422 20:26:35 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.683 20:26:35 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.683 20:26:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.683 20:26:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.683 20:26:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.683 20:26:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.683 20:26:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.683 20:26:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.683 20:26:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.683 20:26:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.683 20:26:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.683 20:26:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.683 20:26:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.683 20:26:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:27.683 20:26:35 json_config -- scripts/common.sh@345 -- # : 1 00:08:27.683 20:26:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.683 20:26:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.683 20:26:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:27.683 20:26:35 json_config -- scripts/common.sh@353 -- # local d=1 00:08:27.683 20:26:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.683 20:26:35 json_config -- scripts/common.sh@355 -- # echo 1 00:08:27.683 20:26:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.683 20:26:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:27.683 20:26:35 json_config -- scripts/common.sh@353 -- # local d=2 00:08:27.683 20:26:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.683 20:26:35 json_config -- scripts/common.sh@355 -- # echo 2 00:08:27.683 20:26:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.683 20:26:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.683 20:26:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.683 20:26:35 json_config -- scripts/common.sh@368 -- # return 0 00:08:27.683 20:26:35 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.683 20:26:35 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.683 --rc genhtml_branch_coverage=1 00:08:27.683 --rc genhtml_function_coverage=1 00:08:27.683 --rc genhtml_legend=1 00:08:27.683 --rc geninfo_all_blocks=1 00:08:27.683 --rc geninfo_unexecuted_blocks=1 00:08:27.683 00:08:27.683 ' 00:08:27.683 20:26:35 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.683 --rc genhtml_branch_coverage=1 00:08:27.683 --rc genhtml_function_coverage=1 00:08:27.683 --rc genhtml_legend=1 00:08:27.683 --rc geninfo_all_blocks=1 00:08:27.683 --rc geninfo_unexecuted_blocks=1 00:08:27.683 00:08:27.683 ' 00:08:27.683 20:26:35 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.683 --rc genhtml_branch_coverage=1 00:08:27.683 --rc genhtml_function_coverage=1 00:08:27.683 --rc genhtml_legend=1 00:08:27.683 --rc geninfo_all_blocks=1 00:08:27.683 --rc geninfo_unexecuted_blocks=1 00:08:27.683 00:08:27.683 ' 00:08:27.683 20:26:35 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:27.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.683 --rc genhtml_branch_coverage=1 00:08:27.683 --rc genhtml_function_coverage=1 00:08:27.683 --rc genhtml_legend=1 00:08:27.683 --rc geninfo_all_blocks=1 00:08:27.683 --rc geninfo_unexecuted_blocks=1 00:08:27.683 00:08:27.683 ' 00:08:27.683 20:26:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.683 20:26:35 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5d97dcf-b1c6-43b0-8642-7f1ad1f07ee4 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b5d97dcf-b1c6-43b0-8642-7f1ad1f07ee4 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.683 20:26:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.683 20:26:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.683 20:26:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.683 20:26:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.683 20:26:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.683 20:26:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.683 20:26:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.683 20:26:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.683 20:26:35 json_config -- paths/export.sh@5 -- # export PATH 00:08:27.684 20:26:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@51 -- # : 0 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.684 20:26:35 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.684 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.684 20:26:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.684 20:26:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:27.684 20:26:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:27.684 20:26:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:27.684 20:26:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:27.684 20:26:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:27.684 20:26:35 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:27.684 WARNING: No tests are enabled so not running JSON configuration tests 00:08:27.684 20:26:35 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:27.684 00:08:27.684 real 0m0.236s 00:08:27.684 user 0m0.141s 00:08:27.684 sys 0m0.094s 00:08:27.684 20:26:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.684 20:26:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:27.684 ************************************ 00:08:27.684 END TEST json_config 00:08:27.684 ************************************ 00:08:27.684 20:26:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:27.684 20:26:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.684 20:26:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.684 20:26:35 -- common/autotest_common.sh@10 -- # set +x 00:08:27.684 ************************************ 00:08:27.684 START TEST json_config_extra_key 00:08:27.684 ************************************ 00:08:27.684 20:26:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:27.944 20:26:35 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.944 20:26:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.944 20:26:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.944 20:26:35 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.944 20:26:35 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.944 20:26:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:27.944 20:26:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.944 20:26:35 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:27.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.944 --rc genhtml_branch_coverage=1 00:08:27.944 --rc genhtml_function_coverage=1 00:08:27.944 --rc genhtml_legend=1 00:08:27.944 --rc geninfo_all_blocks=1 00:08:27.944 --rc geninfo_unexecuted_blocks=1 00:08:27.944 00:08:27.944 ' 00:08:27.944 20:26:35 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:27.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.944 --rc genhtml_branch_coverage=1 00:08:27.944 --rc genhtml_function_coverage=1 00:08:27.944 --rc genhtml_legend=1 00:08:27.944 --rc geninfo_all_blocks=1 00:08:27.944 --rc geninfo_unexecuted_blocks=1 00:08:27.944 00:08:27.944 ' 00:08:27.944 20:26:35 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:27.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.945 --rc genhtml_branch_coverage=1 00:08:27.945 --rc genhtml_function_coverage=1 00:08:27.945 --rc genhtml_legend=1 00:08:27.945 --rc geninfo_all_blocks=1 00:08:27.945 --rc geninfo_unexecuted_blocks=1 00:08:27.945 00:08:27.945 ' 00:08:27.945 20:26:35 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:27.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.945 --rc genhtml_branch_coverage=1 00:08:27.945 --rc 
genhtml_function_coverage=1 00:08:27.945 --rc genhtml_legend=1 00:08:27.945 --rc geninfo_all_blocks=1 00:08:27.945 --rc geninfo_unexecuted_blocks=1 00:08:27.945 00:08:27.945 ' 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b5d97dcf-b1c6-43b0-8642-7f1ad1f07ee4 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b5d97dcf-b1c6-43b0-8642-7f1ad1f07ee4 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.945 20:26:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.945 20:26:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.945 20:26:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.945 20:26:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.945 20:26:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.945 20:26:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.945 20:26:35 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.945 20:26:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:27.945 20:26:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.945 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.945 20:26:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:27.945 INFO: launching applications... 
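Worth decoding before the next suite starts: the `[: : integer expression expected` complaint from nvmf/common.sh line 33 above is harmless noise. Inside build_nvmf_app_args an unset flag reaches `[ ... -eq 1 ]`, so `[` sees an empty string where it needs an integer. A minimal reproduction and the usual defensive form (`flag` is an illustrative name, not the variable common.sh actually tests):

  flag=''
  [ "$flag" -eq 1 ] && echo enabled       # bash: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo enabled  # default the expansion so the test always sees an integer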
00:08:27.945 20:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58717 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:27.945 Waiting for target to run... 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58717 /var/tmp/spdk_tgt.sock 00:08:27.945 20:26:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58717 ']' 00:08:27.945 20:26:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:27.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:27.945 20:26:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:27.945 20:26:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.945 20:26:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:27.945 20:26:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.945 20:26:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:28.204 [2024-11-25 20:26:36.076384] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:08:28.204 [2024-11-25 20:26:36.076524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58717 ] 00:08:28.462 [2024-11-25 20:26:36.481677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.462 [2024-11-25 20:26:36.588810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.397 00:08:29.397 INFO: shutting down applications... 00:08:29.398 20:26:37 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.398 20:26:37 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:29.398 20:26:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
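The start sequence traced above reduces to: launch spdk_tgt against the extra-key JSON config, record its pid, and block until the RPC socket answers. A standalone approximation (paths shown repo-relative; the harness's waitforlisten is more thorough than the bare socket probe sketched here):

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  app_pid=$!
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk_tgt.sock ]] && break   # socket appears once the target listens
      sleep 0.1
  done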
00:08:29.398 20:26:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58717 ]] 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58717 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:08:29.398 20:26:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:29.985 20:26:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:29.985 20:26:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.985 20:26:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:08:29.985 20:26:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:30.553 20:26:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:30.553 20:26:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:30.553 20:26:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:08:30.553 20:26:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:30.814 20:26:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:30.814 20:26:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:30.814 20:26:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:08:30.814 20:26:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:31.383 20:26:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:31.383 20:26:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:31.383 20:26:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:08:31.383 20:26:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:31.952 20:26:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:31.952 20:26:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:31.952 20:26:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:08:31.952 20:26:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:32.522 20:26:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:32.522 20:26:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:32.522 20:26:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58717 00:08:32.522 SPDK target shutdown done 00:08:32.522 Success 00:08:32.522 20:26:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:32.522 20:26:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:32.522 20:26:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:32.522 20:26:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:32.522 20:26:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:32.522 00:08:32.522 real 0m4.681s 00:08:32.522 user 0m4.246s 00:08:32.522 sys 0m0.620s 00:08:32.522 
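The shutdown that just completed (roughly three seconds of half-second polls between 00:08:29.398 and 00:08:32.522) is json_config/common.sh's generic SIGINT-then-poll teardown. Reduced to its essentials:

  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do                # give the target up to 15 s
      kill -0 "$app_pid" 2>/dev/null || break   # signal 0 only probes liveness
      sleep 0.5
  done
  kill -0 "$app_pid" 2>/dev/null || echo 'SPDK target shutdown done'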
20:26:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.522 20:26:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:32.522 ************************************ 00:08:32.522 END TEST json_config_extra_key 00:08:32.522 ************************************ 00:08:32.522 20:26:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:32.522 20:26:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.522 20:26:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.522 20:26:40 -- common/autotest_common.sh@10 -- # set +x 00:08:32.522 ************************************ 00:08:32.522 START TEST alias_rpc 00:08:32.522 ************************************ 00:08:32.522 20:26:40 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:32.522 * Looking for test storage... 00:08:32.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:32.522 20:26:40 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:32.522 20:26:40 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:32.522 20:26:40 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.782 20:26:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.782 --rc genhtml_branch_coverage=1 00:08:32.782 --rc genhtml_function_coverage=1 00:08:32.782 --rc genhtml_legend=1 00:08:32.782 --rc geninfo_all_blocks=1 00:08:32.782 --rc geninfo_unexecuted_blocks=1 00:08:32.782 00:08:32.782 ' 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.782 --rc genhtml_branch_coverage=1 00:08:32.782 --rc genhtml_function_coverage=1 00:08:32.782 --rc genhtml_legend=1 00:08:32.782 --rc geninfo_all_blocks=1 00:08:32.782 --rc geninfo_unexecuted_blocks=1 00:08:32.782 00:08:32.782 ' 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.782 --rc genhtml_branch_coverage=1 00:08:32.782 --rc genhtml_function_coverage=1 00:08:32.782 --rc genhtml_legend=1 00:08:32.782 --rc geninfo_all_blocks=1 00:08:32.782 --rc geninfo_unexecuted_blocks=1 00:08:32.782 00:08:32.782 ' 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.782 --rc genhtml_branch_coverage=1 00:08:32.782 --rc genhtml_function_coverage=1 00:08:32.782 --rc genhtml_legend=1 00:08:32.782 --rc geninfo_all_blocks=1 00:08:32.782 --rc geninfo_unexecuted_blocks=1 00:08:32.782 00:08:32.782 ' 00:08:32.782 20:26:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:32.782 20:26:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:32.782 20:26:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58834 00:08:32.782 20:26:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58834 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58834 ']' 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:32.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.782 20:26:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.782 [2024-11-25 20:26:40.857257] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:08:32.783 [2024-11-25 20:26:40.858396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58834 ] 00:08:33.047 [2024-11-25 20:26:41.057356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.307 [2024-11-25 20:26:41.179719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.242 20:26:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.242 20:26:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:34.242 20:26:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:34.506 20:26:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58834 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58834 ']' 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58834 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58834 00:08:34.506 killing process with pid 58834 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58834' 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 58834 00:08:34.506 20:26:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 58834 00:08:37.066 ************************************ 00:08:37.066 END TEST alias_rpc 00:08:37.066 ************************************ 00:08:37.066 00:08:37.066 real 0m4.410s 00:08:37.066 user 0m4.483s 00:08:37.066 sys 0m0.642s 00:08:37.066 20:26:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.066 20:26:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.066 20:26:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:37.066 20:26:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:37.066 20:26:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.066 20:26:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.066 20:26:44 -- common/autotest_common.sh@10 -- # set +x 00:08:37.066 ************************************ 00:08:37.066 START TEST spdkcli_tcp 00:08:37.066 ************************************ 00:08:37.066 20:26:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:37.066 * Looking for test storage... 
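Each suite in this run (json_config_extra_key and alias_rpc above, now spdkcli_tcp) re-enters the same `lt 1.15 2` gate from scripts/common.sh before settling on lcov flags. The comparison the traces keep walking condenses to the sketch below; the `decimal` scrubbing of leading zeros is elided:

  version_lt() {   # version_lt 1.15 2 exits 0 when $1 sorts before $2
      local -a ver1 ver2
      local v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }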
00:08:37.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:37.066 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.066 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.066 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.327 20:26:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.327 --rc genhtml_branch_coverage=1 00:08:37.327 --rc genhtml_function_coverage=1 00:08:37.327 --rc genhtml_legend=1 00:08:37.327 --rc geninfo_all_blocks=1 00:08:37.327 --rc geninfo_unexecuted_blocks=1 00:08:37.327 00:08:37.327 ' 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.327 --rc genhtml_branch_coverage=1 00:08:37.327 --rc genhtml_function_coverage=1 00:08:37.327 --rc genhtml_legend=1 00:08:37.327 --rc geninfo_all_blocks=1 00:08:37.327 --rc geninfo_unexecuted_blocks=1 00:08:37.327 
00:08:37.327 ' 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.327 --rc genhtml_branch_coverage=1 00:08:37.327 --rc genhtml_function_coverage=1 00:08:37.327 --rc genhtml_legend=1 00:08:37.327 --rc geninfo_all_blocks=1 00:08:37.327 --rc geninfo_unexecuted_blocks=1 00:08:37.327 00:08:37.327 ' 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.327 --rc genhtml_branch_coverage=1 00:08:37.327 --rc genhtml_function_coverage=1 00:08:37.327 --rc genhtml_legend=1 00:08:37.327 --rc geninfo_all_blocks=1 00:08:37.327 --rc geninfo_unexecuted_blocks=1 00:08:37.327 00:08:37.327 ' 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58941 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58941 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58941 ']' 00:08:37.327 20:26:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.327 20:26:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.327 [2024-11-25 20:26:45.346453] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
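The spdk_tgt launched for this suite uses `-m 0x3`, a hex core mask (binary 11, cores 0 and 1, hence the two reactor notices below), and leaves its RPC socket at the default /var/tmp/spdk.sock. The trace below then bridges that UNIX socket onto TCP port 9998 with socat and drives it through rpc.py. Stripped of harness plumbing, the pattern is:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &           # one-shot TCP-to-UNIX relay
  socat_pid=$!
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # -r connection retries, -t timeout
  kill "$socat_pid"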
00:08:37.327 [2024-11-25 20:26:45.347062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58941 ] 00:08:37.586 [2024-11-25 20:26:45.533453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:37.586 [2024-11-25 20:26:45.650201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.586 [2024-11-25 20:26:45.650247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.519 20:26:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.519 20:26:46 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:38.519 20:26:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58964 00:08:38.519 20:26:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:38.519 20:26:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:38.777 [ 00:08:38.777 "bdev_malloc_delete", 00:08:38.777 "bdev_malloc_create", 00:08:38.777 "bdev_null_resize", 00:08:38.777 "bdev_null_delete", 00:08:38.777 "bdev_null_create", 00:08:38.777 "bdev_nvme_cuse_unregister", 00:08:38.777 "bdev_nvme_cuse_register", 00:08:38.777 "bdev_opal_new_user", 00:08:38.777 "bdev_opal_set_lock_state", 00:08:38.777 "bdev_opal_delete", 00:08:38.777 "bdev_opal_get_info", 00:08:38.777 "bdev_opal_create", 00:08:38.777 "bdev_nvme_opal_revert", 00:08:38.777 "bdev_nvme_opal_init", 00:08:38.777 "bdev_nvme_send_cmd", 00:08:38.777 "bdev_nvme_set_keys", 00:08:38.777 "bdev_nvme_get_path_iostat", 00:08:38.777 "bdev_nvme_get_mdns_discovery_info", 00:08:38.777 "bdev_nvme_stop_mdns_discovery", 00:08:38.777 "bdev_nvme_start_mdns_discovery", 00:08:38.777 "bdev_nvme_set_multipath_policy", 00:08:38.777 "bdev_nvme_set_preferred_path", 00:08:38.777 "bdev_nvme_get_io_paths", 00:08:38.777 "bdev_nvme_remove_error_injection", 00:08:38.777 "bdev_nvme_add_error_injection", 00:08:38.777 "bdev_nvme_get_discovery_info", 00:08:38.777 "bdev_nvme_stop_discovery", 00:08:38.777 "bdev_nvme_start_discovery", 00:08:38.777 "bdev_nvme_get_controller_health_info", 00:08:38.777 "bdev_nvme_disable_controller", 00:08:38.777 "bdev_nvme_enable_controller", 00:08:38.777 "bdev_nvme_reset_controller", 00:08:38.777 "bdev_nvme_get_transport_statistics", 00:08:38.777 "bdev_nvme_apply_firmware", 00:08:38.777 "bdev_nvme_detach_controller", 00:08:38.777 "bdev_nvme_get_controllers", 00:08:38.777 "bdev_nvme_attach_controller", 00:08:38.777 "bdev_nvme_set_hotplug", 00:08:38.777 "bdev_nvme_set_options", 00:08:38.777 "bdev_passthru_delete", 00:08:38.777 "bdev_passthru_create", 00:08:38.777 "bdev_lvol_set_parent_bdev", 00:08:38.777 "bdev_lvol_set_parent", 00:08:38.777 "bdev_lvol_check_shallow_copy", 00:08:38.777 "bdev_lvol_start_shallow_copy", 00:08:38.777 "bdev_lvol_grow_lvstore", 00:08:38.777 "bdev_lvol_get_lvols", 00:08:38.777 "bdev_lvol_get_lvstores", 00:08:38.777 "bdev_lvol_delete", 00:08:38.777 "bdev_lvol_set_read_only", 00:08:38.777 "bdev_lvol_resize", 00:08:38.777 "bdev_lvol_decouple_parent", 00:08:38.777 "bdev_lvol_inflate", 00:08:38.777 "bdev_lvol_rename", 00:08:38.777 "bdev_lvol_clone_bdev", 00:08:38.777 "bdev_lvol_clone", 00:08:38.777 "bdev_lvol_snapshot", 00:08:38.777 "bdev_lvol_create", 00:08:38.777 "bdev_lvol_delete_lvstore", 00:08:38.777 "bdev_lvol_rename_lvstore", 00:08:38.777 
"bdev_lvol_create_lvstore", 00:08:38.777 "bdev_raid_set_options", 00:08:38.777 "bdev_raid_remove_base_bdev", 00:08:38.777 "bdev_raid_add_base_bdev", 00:08:38.777 "bdev_raid_delete", 00:08:38.777 "bdev_raid_create", 00:08:38.777 "bdev_raid_get_bdevs", 00:08:38.777 "bdev_error_inject_error", 00:08:38.777 "bdev_error_delete", 00:08:38.777 "bdev_error_create", 00:08:38.777 "bdev_split_delete", 00:08:38.777 "bdev_split_create", 00:08:38.777 "bdev_delay_delete", 00:08:38.777 "bdev_delay_create", 00:08:38.777 "bdev_delay_update_latency", 00:08:38.777 "bdev_zone_block_delete", 00:08:38.777 "bdev_zone_block_create", 00:08:38.777 "blobfs_create", 00:08:38.777 "blobfs_detect", 00:08:38.777 "blobfs_set_cache_size", 00:08:38.777 "bdev_xnvme_delete", 00:08:38.777 "bdev_xnvme_create", 00:08:38.777 "bdev_aio_delete", 00:08:38.777 "bdev_aio_rescan", 00:08:38.777 "bdev_aio_create", 00:08:38.777 "bdev_ftl_set_property", 00:08:38.777 "bdev_ftl_get_properties", 00:08:38.777 "bdev_ftl_get_stats", 00:08:38.777 "bdev_ftl_unmap", 00:08:38.777 "bdev_ftl_unload", 00:08:38.777 "bdev_ftl_delete", 00:08:38.777 "bdev_ftl_load", 00:08:38.777 "bdev_ftl_create", 00:08:38.777 "bdev_virtio_attach_controller", 00:08:38.777 "bdev_virtio_scsi_get_devices", 00:08:38.777 "bdev_virtio_detach_controller", 00:08:38.777 "bdev_virtio_blk_set_hotplug", 00:08:38.777 "bdev_iscsi_delete", 00:08:38.777 "bdev_iscsi_create", 00:08:38.777 "bdev_iscsi_set_options", 00:08:38.777 "accel_error_inject_error", 00:08:38.777 "ioat_scan_accel_module", 00:08:38.777 "dsa_scan_accel_module", 00:08:38.777 "iaa_scan_accel_module", 00:08:38.777 "keyring_file_remove_key", 00:08:38.777 "keyring_file_add_key", 00:08:38.777 "keyring_linux_set_options", 00:08:38.777 "fsdev_aio_delete", 00:08:38.777 "fsdev_aio_create", 00:08:38.777 "iscsi_get_histogram", 00:08:38.777 "iscsi_enable_histogram", 00:08:38.777 "iscsi_set_options", 00:08:38.777 "iscsi_get_auth_groups", 00:08:38.777 "iscsi_auth_group_remove_secret", 00:08:38.777 "iscsi_auth_group_add_secret", 00:08:38.777 "iscsi_delete_auth_group", 00:08:38.777 "iscsi_create_auth_group", 00:08:38.777 "iscsi_set_discovery_auth", 00:08:38.777 "iscsi_get_options", 00:08:38.777 "iscsi_target_node_request_logout", 00:08:38.777 "iscsi_target_node_set_redirect", 00:08:38.777 "iscsi_target_node_set_auth", 00:08:38.777 "iscsi_target_node_add_lun", 00:08:38.777 "iscsi_get_stats", 00:08:38.777 "iscsi_get_connections", 00:08:38.777 "iscsi_portal_group_set_auth", 00:08:38.777 "iscsi_start_portal_group", 00:08:38.777 "iscsi_delete_portal_group", 00:08:38.777 "iscsi_create_portal_group", 00:08:38.777 "iscsi_get_portal_groups", 00:08:38.777 "iscsi_delete_target_node", 00:08:38.777 "iscsi_target_node_remove_pg_ig_maps", 00:08:38.777 "iscsi_target_node_add_pg_ig_maps", 00:08:38.777 "iscsi_create_target_node", 00:08:38.777 "iscsi_get_target_nodes", 00:08:38.777 "iscsi_delete_initiator_group", 00:08:38.777 "iscsi_initiator_group_remove_initiators", 00:08:38.777 "iscsi_initiator_group_add_initiators", 00:08:38.777 "iscsi_create_initiator_group", 00:08:38.777 "iscsi_get_initiator_groups", 00:08:38.777 "nvmf_set_crdt", 00:08:38.777 "nvmf_set_config", 00:08:38.777 "nvmf_set_max_subsystems", 00:08:38.777 "nvmf_stop_mdns_prr", 00:08:38.777 "nvmf_publish_mdns_prr", 00:08:38.777 "nvmf_subsystem_get_listeners", 00:08:38.777 "nvmf_subsystem_get_qpairs", 00:08:38.777 "nvmf_subsystem_get_controllers", 00:08:38.778 "nvmf_get_stats", 00:08:38.778 "nvmf_get_transports", 00:08:38.778 "nvmf_create_transport", 00:08:38.778 "nvmf_get_targets", 00:08:38.778 
"nvmf_delete_target", 00:08:38.778 "nvmf_create_target", 00:08:38.778 "nvmf_subsystem_allow_any_host", 00:08:38.778 "nvmf_subsystem_set_keys", 00:08:38.778 "nvmf_subsystem_remove_host", 00:08:38.778 "nvmf_subsystem_add_host", 00:08:38.778 "nvmf_ns_remove_host", 00:08:38.778 "nvmf_ns_add_host", 00:08:38.778 "nvmf_subsystem_remove_ns", 00:08:38.778 "nvmf_subsystem_set_ns_ana_group", 00:08:38.778 "nvmf_subsystem_add_ns", 00:08:38.778 "nvmf_subsystem_listener_set_ana_state", 00:08:38.778 "nvmf_discovery_get_referrals", 00:08:38.778 "nvmf_discovery_remove_referral", 00:08:38.778 "nvmf_discovery_add_referral", 00:08:38.778 "nvmf_subsystem_remove_listener", 00:08:38.778 "nvmf_subsystem_add_listener", 00:08:38.778 "nvmf_delete_subsystem", 00:08:38.778 "nvmf_create_subsystem", 00:08:38.778 "nvmf_get_subsystems", 00:08:38.778 "env_dpdk_get_mem_stats", 00:08:38.778 "nbd_get_disks", 00:08:38.778 "nbd_stop_disk", 00:08:38.778 "nbd_start_disk", 00:08:38.778 "ublk_recover_disk", 00:08:38.778 "ublk_get_disks", 00:08:38.778 "ublk_stop_disk", 00:08:38.778 "ublk_start_disk", 00:08:38.778 "ublk_destroy_target", 00:08:38.778 "ublk_create_target", 00:08:38.778 "virtio_blk_create_transport", 00:08:38.778 "virtio_blk_get_transports", 00:08:38.778 "vhost_controller_set_coalescing", 00:08:38.778 "vhost_get_controllers", 00:08:38.778 "vhost_delete_controller", 00:08:38.778 "vhost_create_blk_controller", 00:08:38.778 "vhost_scsi_controller_remove_target", 00:08:38.778 "vhost_scsi_controller_add_target", 00:08:38.778 "vhost_start_scsi_controller", 00:08:38.778 "vhost_create_scsi_controller", 00:08:38.778 "thread_set_cpumask", 00:08:38.778 "scheduler_set_options", 00:08:38.778 "framework_get_governor", 00:08:38.778 "framework_get_scheduler", 00:08:38.778 "framework_set_scheduler", 00:08:38.778 "framework_get_reactors", 00:08:38.778 "thread_get_io_channels", 00:08:38.778 "thread_get_pollers", 00:08:38.778 "thread_get_stats", 00:08:38.778 "framework_monitor_context_switch", 00:08:38.778 "spdk_kill_instance", 00:08:38.778 "log_enable_timestamps", 00:08:38.778 "log_get_flags", 00:08:38.778 "log_clear_flag", 00:08:38.778 "log_set_flag", 00:08:38.778 "log_get_level", 00:08:38.778 "log_set_level", 00:08:38.778 "log_get_print_level", 00:08:38.778 "log_set_print_level", 00:08:38.778 "framework_enable_cpumask_locks", 00:08:38.778 "framework_disable_cpumask_locks", 00:08:38.778 "framework_wait_init", 00:08:38.778 "framework_start_init", 00:08:38.778 "scsi_get_devices", 00:08:38.778 "bdev_get_histogram", 00:08:38.778 "bdev_enable_histogram", 00:08:38.778 "bdev_set_qos_limit", 00:08:38.778 "bdev_set_qd_sampling_period", 00:08:38.778 "bdev_get_bdevs", 00:08:38.778 "bdev_reset_iostat", 00:08:38.778 "bdev_get_iostat", 00:08:38.778 "bdev_examine", 00:08:38.778 "bdev_wait_for_examine", 00:08:38.778 "bdev_set_options", 00:08:38.778 "accel_get_stats", 00:08:38.778 "accel_set_options", 00:08:38.778 "accel_set_driver", 00:08:38.778 "accel_crypto_key_destroy", 00:08:38.778 "accel_crypto_keys_get", 00:08:38.778 "accel_crypto_key_create", 00:08:38.778 "accel_assign_opc", 00:08:38.778 "accel_get_module_info", 00:08:38.778 "accel_get_opc_assignments", 00:08:38.778 "vmd_rescan", 00:08:38.778 "vmd_remove_device", 00:08:38.778 "vmd_enable", 00:08:38.778 "sock_get_default_impl", 00:08:38.778 "sock_set_default_impl", 00:08:38.778 "sock_impl_set_options", 00:08:38.778 "sock_impl_get_options", 00:08:38.778 "iobuf_get_stats", 00:08:38.778 "iobuf_set_options", 00:08:38.778 "keyring_get_keys", 00:08:38.778 "framework_get_pci_devices", 00:08:38.778 
"framework_get_config", 00:08:38.778 "framework_get_subsystems", 00:08:38.778 "fsdev_set_opts", 00:08:38.778 "fsdev_get_opts", 00:08:38.778 "trace_get_info", 00:08:38.778 "trace_get_tpoint_group_mask", 00:08:38.778 "trace_disable_tpoint_group", 00:08:38.778 "trace_enable_tpoint_group", 00:08:38.778 "trace_clear_tpoint_mask", 00:08:38.778 "trace_set_tpoint_mask", 00:08:38.778 "notify_get_notifications", 00:08:38.778 "notify_get_types", 00:08:38.778 "spdk_get_version", 00:08:38.778 "rpc_get_methods" 00:08:38.778 ] 00:08:38.778 20:26:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.778 20:26:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:38.778 20:26:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58941 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58941 ']' 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58941 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58941 00:08:38.778 killing process with pid 58941 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58941' 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58941 00:08:38.778 20:26:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58941 00:08:41.386 ************************************ 00:08:41.386 END TEST spdkcli_tcp 00:08:41.386 ************************************ 00:08:41.386 00:08:41.386 real 0m4.316s 00:08:41.386 user 0m7.654s 00:08:41.386 sys 0m0.696s 00:08:41.386 20:26:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.386 20:26:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 20:26:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:41.386 20:26:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.386 20:26:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.386 20:26:49 -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 ************************************ 00:08:41.386 START TEST dpdk_mem_utility 00:08:41.386 ************************************ 00:08:41.386 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:41.386 * Looking for test storage... 
00:08:41.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:41.386 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.386 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.386 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.644 20:26:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.644 --rc genhtml_branch_coverage=1 00:08:41.644 --rc genhtml_function_coverage=1 00:08:41.644 --rc genhtml_legend=1 00:08:41.644 --rc geninfo_all_blocks=1 00:08:41.644 --rc geninfo_unexecuted_blocks=1 00:08:41.644 00:08:41.644 ' 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.644 --rc 
genhtml_branch_coverage=1 00:08:41.644 --rc genhtml_function_coverage=1 00:08:41.644 --rc genhtml_legend=1 00:08:41.644 --rc geninfo_all_blocks=1 00:08:41.644 --rc geninfo_unexecuted_blocks=1 00:08:41.644 00:08:41.644 ' 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.644 --rc genhtml_branch_coverage=1 00:08:41.644 --rc genhtml_function_coverage=1 00:08:41.644 --rc genhtml_legend=1 00:08:41.644 --rc geninfo_all_blocks=1 00:08:41.644 --rc geninfo_unexecuted_blocks=1 00:08:41.644 00:08:41.644 ' 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.644 --rc genhtml_branch_coverage=1 00:08:41.644 --rc genhtml_function_coverage=1 00:08:41.644 --rc genhtml_legend=1 00:08:41.644 --rc geninfo_all_blocks=1 00:08:41.644 --rc geninfo_unexecuted_blocks=1 00:08:41.644 00:08:41.644 ' 00:08:41.644 20:26:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:41.644 20:26:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:41.644 20:26:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59069 00:08:41.644 20:26:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59069 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59069 ']' 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.644 20:26:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:41.644 [2024-11-25 20:26:49.755857] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
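What follows is the heart of the dpdk_mem_utility suite: once the target is up, the harness asks it to dump DPDK memory statistics and renders the dump two ways. rpc_cmd in the trace is roughly a wrapper over rpc.py, so the equivalent by hand is:

  scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
  scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0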
00:08:41.644 [2024-11-25 20:26:49.756363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59069 ]
00:08:41.903 [2024-11-25 20:26:49.971947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:42.162 [2024-11-25 20:26:50.092836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.097 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:43.097 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:08:43.097 20:26:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:08:43.097 20:26:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:08:43.097 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.097 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:08:43.098 {
00:08:43.098 "filename": "/tmp/spdk_mem_dump.txt"
00:08:43.098 }
00:08:43.098 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.098 20:26:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:08:43.098 DPDK memory size 816.000000 MiB in 1 heap(s)
00:08:43.098 1 heaps totaling size 816.000000 MiB
00:08:43.098 size: 816.000000 MiB heap id: 0
00:08:43.098 end heaps----------
00:08:43.098 9 mempools totaling size 595.772034 MiB
00:08:43.098 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:08:43.098 size: 158.602051 MiB name: PDU_data_out_Pool
00:08:43.098 size: 92.545471 MiB name: bdev_io_59069
00:08:43.098 size: 50.003479 MiB name: msgpool_59069
00:08:43.098 size: 36.509338 MiB name: fsdev_io_59069
00:08:43.098 size: 21.763794 MiB name: PDU_Pool
00:08:43.098 size: 19.513306 MiB name: SCSI_TASK_Pool
00:08:43.098 size: 4.133484 MiB name: evtpool_59069
00:08:43.098 size: 0.026123 MiB name: Session_Pool
00:08:43.098 end mempools-------
00:08:43.098 6 memzones totaling size 4.142822 MiB
00:08:43.098 size: 1.000366 MiB name: RG_ring_0_59069
00:08:43.098 size: 1.000366 MiB name: RG_ring_1_59069
00:08:43.098 size: 1.000366 MiB name: RG_ring_4_59069
00:08:43.098 size: 1.000366 MiB name: RG_ring_5_59069
00:08:43.098 size: 0.125366 MiB name: RG_ring_2_59069
00:08:43.098 size: 0.015991 MiB name: RG_ring_3_59069
00:08:43.098 end memzones-------
00:08:43.098 20:26:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:08:43.098 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18
00:08:43.098 list of free elements.
size: 16.790649 MiB
00:08:43.098 element at address: 0x200006400000 with size: 1.995972 MiB
00:08:43.098 element at address: 0x20000a600000 with size: 1.995972 MiB
00:08:43.098 element at address: 0x200003e00000 with size: 1.991028 MiB
00:08:43.098 element at address: 0x200018d00040 with size: 0.999939 MiB
00:08:43.098 element at address: 0x200019100040 with size: 0.999939 MiB
00:08:43.098 element at address: 0x200019200000 with size: 0.999084 MiB
00:08:43.098 element at address: 0x200031e00000 with size: 0.994324 MiB
00:08:43.098 element at address: 0x200000400000 with size: 0.992004 MiB
00:08:43.098 element at address: 0x200018a00000 with size: 0.959656 MiB
00:08:43.098 element at address: 0x200019500040 with size: 0.936401 MiB
00:08:43.098 element at address: 0x200000200000 with size: 0.716980 MiB
00:08:43.098 element at address: 0x20001ac00000 with size: 0.561218 MiB
00:08:43.098 element at address: 0x200000c00000 with size: 0.490173 MiB
00:08:43.098 element at address: 0x200018e00000 with size: 0.487976 MiB
00:08:43.098 element at address: 0x200019600000 with size: 0.485413 MiB
00:08:43.098 element at address: 0x200012c00000 with size: 0.443237 MiB
00:08:43.098 element at address: 0x200028000000 with size: 0.390442 MiB
00:08:43.098 element at address: 0x200000800000 with size: 0.350891 MiB
00:08:43.098 list of standard malloc elements. size: 199.288452 MiB
00:08:43.098 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:08:43.098 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:08:43.098 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:08:43.098 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:08:43.098 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:08:43.098 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:08:43.098 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:08:43.098 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:08:43.098 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:08:43.098 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:08:43.098 element at address: 0x200012bff040 with size: 0.000305 MiB
00:08:43.098 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:08:43.098 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:08:43.098 [~350 further 0.000244 MiB elements elided: contiguous per-buffer runs in the 0x2000004f..., 0x20000087e.../0x2000008ff..., 0x200000c7.../0x200000cf..., 0x20000a5ff..., 0x200012bff.../0x200012c7..., 0x200018af.../0x200018e7.../0x200018ef..., 0x2000192ff.../0x2000195ef.../0x2000196bc..., 0x20001ac8f...-0x20001ac95..., and 0x200028063...-0x2000280fe... address ranges]
00:08:43.100 list of memzone associated elements. size: 599.920898 MiB
00:08:43.100 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:08:43.100 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:08:43.100 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:08:43.100 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:08:43.100 element at address: 0x200012df4740 with size: 92.045105 MiB
00:08:43.100 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59069_0
00:08:43.100 element at address: 0x200000dff340 with size: 48.003113 MiB
00:08:43.100 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59069_0
00:08:43.100 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:08:43.100 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59069_0
00:08:43.100 element at address: 0x2000197be900 with size: 20.255615 MiB
00:08:43.100 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:08:43.100 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:08:43.100 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:08:43.100 [~30 smaller memzone entries elided: the evtpool/msgpool/fsdev_io/bdev_io/PDU/SCSI/Session_Pool regions and RG_ring_* rings for pid 59069, from 3.000305 MiB down to 0.000366 MiB]
00:08:43.100 20:26:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:08:43.100 20:26:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59069
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59069 ']'
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59069
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59069
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59069'
00:08:43.100 killing process with pid 59069
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59069
00:08:43.100 20:26:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59069
00:08:45.629
00:08:45.629 real 0m4.358s
00:08:45.629 user 0m4.200s
00:08:45.629 sys 0m0.702s
00:08:45.629 20:26:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:45.629 20:26:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:08:45.629 ************************************
00:08:45.629 END TEST dpdk_mem_utility
00:08:45.629 ************************************
00:08:45.888 20:26:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:08:45.888 20:26:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:45.888 20:26:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:45.888 20:26:53 -- common/autotest_common.sh@10 -- # set +x
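The element dump above is regular enough to post-process with standard tools. A minimal sketch, assuming the dump was captured to a file named memdump.log (hypothetical name; the field positions follow the "element at address: <addr> with size: <n> MiB" format shown above), that counts the elements and totals their sizes:

  # count 'element at address' records and sum their MiB sizes (sketch)
  awk '/element at address:/ { total += $(NF-1); n++ }
       END { printf "elements: %d, total: %.3f MiB\n", n, total }' memdump.log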
00:08:45.888 ************************************
00:08:45.888 START TEST event
00:08:45.888 ************************************
00:08:45.888 20:26:53 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:08:45.888 * Looking for test storage...
00:08:45.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:08:45.888 20:26:53 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:45.888 20:26:53 event -- common/autotest_common.sh@1693 -- # lcov --version
00:08:45.888 20:26:53 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:46.147 20:26:54 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:46.147 20:26:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:46.147 [~30 scripts/common.sh xtrace records elided: cmp_versions splits 1.15 and 2 on IFS=.-:, compares them component by component via decimal, and returns 0]
00:08:46.147 20:26:54 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:46.147 20:26:54 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:08:46.147 20:26:54 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:08:46.147 20:26:54 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:08:46.147 20:26:54 event -- common/autotest_common.sh@1707 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
00:08:46.147 20:26:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:08:46.147 20:26:54 event -- bdev/nbd_common.sh@6 -- # set -e
00:08:46.147 20:26:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:08:46.147 20:26:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:46.147 20:26:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:46.147 20:26:54 event -- common/autotest_common.sh@10 -- # set +x
00:08:46.147 ************************************
00:08:46.147 START TEST event_perf
00:08:46.147 ************************************
00:08:46.147 20:26:54 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:08:46.147 Running I/O for 1 seconds...[2024-11-25 20:26:54.135439] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:08:46.147 [2024-11-25 20:26:54.135563] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59177 ]
00:08:46.406 [2024-11-25 20:26:54.323684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:46.406 [2024-11-25 20:26:54.457714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:46.406 [2024-11-25 20:26:54.457863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:46.406 Running I/O for 1 seconds...[2024-11-25 20:26:54.457965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:46.406 [2024-11-25 20:26:54.457990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:47.781
00:08:47.781 lcore 0: 103643
00:08:47.781 lcore 1: 103639
00:08:47.781 lcore 2: 103642
00:08:47.781 lcore 3: 103643
00:08:47.781 done.
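The per-lcore counters above are the number of events each of the four reactors (core mask 0xF) processed in the one-second run. A rough sketch of reproducing the same measurement by hand from an SPDK checkout, using the binary path and flags shown in the trace above and summing the per-lcore counts:

  # -m: reactor core mask, -t: run time in seconds (flags as traced above)
  ./test/event/event_perf/event_perf -m 0xF -t 1 |
      awk '/^lcore/ { sum += $3 } END { print "total events:", sum }'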
00:08:47.781
00:08:47.781 real 0m1.638s
00:08:47.781 user 0m4.381s
00:08:47.781 sys 0m0.128s
00:08:47.781 20:26:55 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:47.781 20:26:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:08:47.781 ************************************
00:08:47.781 END TEST event_perf
00:08:47.781 ************************************
00:08:47.781 20:26:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:08:47.781 20:26:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:47.781 20:26:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:47.781 20:26:55 event -- common/autotest_common.sh@10 -- # set +x
00:08:47.781 ************************************
00:08:47.781 START TEST event_reactor
00:08:47.781 ************************************
00:08:47.781 20:26:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:08:47.781 [2024-11-25 20:26:55.823239] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:08:48.053 [2024-11-25 20:26:55.823677] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ]
00:08:48.053 [2024-11-25 20:26:56.016423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:48.341 [2024-11-25 20:26:56.176044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:49.713 test_start
00:08:49.713 oneshot
00:08:49.713 tick 100
00:08:49.713 tick 100
00:08:49.713 tick 250
00:08:49.713 tick 100
00:08:49.713 tick 100
00:08:49.713 tick 100
00:08:49.713 tick 250
00:08:49.713 tick 500
00:08:49.713 tick 100
00:08:49.714 tick 100
00:08:49.714 tick 250
00:08:49.714 tick 100
00:08:49.714 tick 100
00:08:49.714 test_end
00:08:49.714
00:08:49.714 real 0m1.642s
00:08:49.714 user 0m1.417s
00:08:49.714 sys 0m0.113s
00:08:49.714 ************************************
00:08:49.714 END TEST event_reactor
00:08:49.714 ************************************
00:08:49.714 20:26:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:49.714 20:26:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:08:49.714 20:26:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:49.714 20:26:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:49.714 20:26:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:49.714 20:26:57 event -- common/autotest_common.sh@10 -- # set +x
00:08:49.714 ************************************
00:08:49.714 START TEST event_reactor_perf
00:08:49.714 ************************************
00:08:49.714 20:26:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:49.714 [2024-11-25 20:26:57.504456] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:08:49.714 [2024-11-25 20:26:57.504614] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ]
00:08:49.714 [2024-11-25 20:26:57.675639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:49.971 [2024-11-25 20:26:57.882158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.341 test_start
00:08:51.341 test_end
00:08:51.341 Performance: 319684 events per second
00:08:51.341
00:08:51.341 real 0m1.715s
00:08:51.341 user 0m1.470s
00:08:51.341 sys 0m0.132s
00:08:51.341 ************************************
00:08:51.341 END TEST event_reactor_perf
00:08:51.341 ************************************
00:08:51.341 20:26:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:51.341 20:26:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:08:51.341 20:26:59 event -- event/event.sh@49 -- # uname -s
00:08:51.341 20:26:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:08:51.341 20:26:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:08:51.341 20:26:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:51.341 20:26:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:51.341 20:26:59 event -- common/autotest_common.sh@10 -- # set +x
00:08:51.341 ************************************
00:08:51.341 START TEST event_scheduler
00:08:51.341 ************************************
00:08:51.341 20:26:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:08:51.341 * Looking for test storage...
00:08:51.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:08:51.341 20:26:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:51.341 20:26:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:08:51.341 20:26:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:51.626 20:26:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:51.626 20:26:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:51.626 [~35 xtrace records elided: the same scripts/common.sh cmp_versions comparison of lcov 1.15 against 2, returning 0, and the same LCOV_OPTS/LCOV exports already shown for the parent event suite]
00:08:51.626 20:26:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:08:51.626 20:26:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59329
00:08:51.626 20:26:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:08:51.626 20:26:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:08:51.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
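waitforlisten, traced next, is what gates the test until the app is actually serving RPCs on that socket. A simplified sketch of the idea only, not the real autotest_common.sh implementation (the helper name and retry loop are illustrative; rpc.py and rpc_get_methods are the standard SPDK RPC client and method):

  wait_for_rpc_ready() {   # illustrative stand-in for waitforlisten
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1            # app died before listening
          # succeeds only once the app answers on its RPC socket
          scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }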
00:08:51.626 20:26:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59329
00:08:51.626 20:26:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59329 ']'
00:08:51.626 20:26:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:51.626 20:26:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:51.626 20:26:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:51.626 20:26:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:51.626 20:26:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:51.626 [2024-11-25 20:26:59.641454] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:08:51.626 [2024-11-25 20:26:59.641778] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59329 ]
00:08:51.883 [2024-11-25 20:26:59.826833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:51.883 [2024-11-25 20:26:59.963679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.883 [2024-11-25 20:26:59.963827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:51.883 [2024-11-25 20:26:59.963942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:51.883 [2024-11-25 20:26:59.963980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:52.447 20:27:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:52.447 20:27:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:08:52.447 20:27:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:08:52.447 20:27:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.447 20:27:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:52.447 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:08:52.447 POWER: Cannot set governor of lcore 0 to userspace
00:08:52.447 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:08:52.447 POWER: Cannot set governor of lcore 0 to performance
00:08:52.447 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:08:52.447 POWER: Cannot set governor of lcore 0 to userspace
00:08:52.447 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:08:52.447 POWER: Cannot set governor of lcore 0 to userspace
00:08:52.447 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:08:52.447 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:08:52.447 POWER: Unable to set Power Management Environment for lcore 0
00:08:52.447 [2024-11-25 20:27:00.534935] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:08:52.447 [2024-11-25 20:27:00.534968] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:08:52.447 [2024-11-25 20:27:00.534983] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
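The governor failures above are benign here: this VM exposes no cpufreq scaling_governor files, so the dynamic scheduler simply runs without the dpdk governor and falls back to its default thresholds (the set_opts notices that follow: load limit 20, core limit 80, core busy 95). Because the app was started with --wait-for-rpc, the scheduler switch happens before framework init; a sketch of the same two calls issued by hand via rpc.py from an SPDK checkout (the harness does this through its rpc_cmd wrapper):

  # select the dynamic scheduler, then let the framework finish initializing
  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init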
00:08:52.447 [2024-11-25 20:27:00.535006] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:08:52.447 [2024-11-25 20:27:00.535018] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:08:52.447 [2024-11-25 20:27:00.535031] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:08:52.447 20:27:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.447 20:27:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:08:52.447 20:27:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.447 20:27:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:53.013 [2024-11-25 20:27:00.871970] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:08:53.013 20:27:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.013 20:27:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:08:53.013 20:27:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.013 20:27:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.013 20:27:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:53.013 ************************************
00:08:53.013 START TEST scheduler_create_thread
00:08:53.013 ************************************
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:08:53.013 2
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:08:53.013 3
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:08:53.013 4
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:08:53.013 5
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:08:53.013 6
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:08:53.013 7
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:08:53.013 8
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:08:53.013 9
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:08:53.013 10
00:08:53.013 20:27:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:08:53.013 [the repeated '[[ 0 == 0 ]] / xtrace_disable / set +x' wrapper records around each rpc_cmd call above are elided; they recur verbatim for every thread-create]
00:08:53.013 20:27:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:53.013 20:27:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:08:53.013 20:27:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:08:53.949 20:27:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:08:55.338 20:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:08:55.338 20:27:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:08:55.338 [the surrounding '[[ 0 == 0 ]] / xtrace_disable / set +x' wrapper records are again elided]
00:08:56.272 ************************************
00:08:56.272 END TEST scheduler_create_thread
00:08:56.272 ************************************
00:08:56.272 20:27:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.272
00:08:56.272 real 0m3.381s
00:08:56.272 user 0m0.031s
00:08:56.272 sys 0m0.012s
00:08:56.273 20:27:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.273 20:27:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:56.273 20:27:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:08:56.273 20:27:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59329
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59329 ']'
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59329
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59329
00:08:56.273 killing process with pid 59329
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59329'
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59329
00:08:56.273 20:27:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59329
00:08:58.428 [2024-11-25 20:27:04.650461] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:08:58.428
00:08:58.428 real 0m6.821s
00:08:58.428 user 0m14.427s
00:08:58.428 sys 0m0.581s
00:08:58.428 20:27:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:58.428 20:27:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:58.428 ************************************
00:08:58.428 END TEST event_scheduler
00:08:58.428 ************************************
00:08:58.428 20:27:06 event -- event/event.sh@51 -- # modprobe -n nbd
00:08:58.428 20:27:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:08:58.428 20:27:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:58.428 20:27:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:58.428 20:27:06 event -- common/autotest_common.sh@10 -- # set +x
00:08:58.428 ************************************
00:08:58.428 START TEST app_repeat
00:08:58.428 ************************************
00:08:58.428 20:27:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59459
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:08:58.428 Process app_repeat pid: 59459
00:08:58.428 spdk_app_start Round 0
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59459'
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:08:58.428 20:27:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59459 /var/tmp/spdk-nbd.sock
00:08:58.428 20:27:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59459 ']'
00:08:58.428 20:27:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:58.428 20:27:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:58.428 20:27:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:58.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
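Once this app is up, the repeat test wires two 64 MB malloc bdevs (4096-byte block size) to NBD devices over the dedicated /var/tmp/spdk-nbd.sock socket, as the rpc.py calls traced below show. The by-hand equivalent is roughly (a sketch; run from an SPDK checkout):

  sock=/var/tmp/spdk-nbd.sock
  scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096      # -> Malloc0
  scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096      # -> Malloc1
  scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1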
00:08:58.428 20:27:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.428 20:27:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:58.428 [2024-11-25 20:27:06.244559] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:08:58.428 [2024-11-25 20:27:06.244794] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59459 ] 00:08:58.428 [2024-11-25 20:27:06.436167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.429 [2024-11-25 20:27:06.558889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.429 [2024-11-25 20:27:06.558932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.403 20:27:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.403 20:27:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:59.403 20:27:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:59.403 Malloc0 00:08:59.662 20:27:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:59.920 Malloc1 00:08:59.920 20:27:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:59.920 20:27:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:00.178 /dev/nbd0 00:09:00.178 20:27:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:00.178 20:27:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:00.178 20:27:08 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:00.178 1+0 records in 00:09:00.178 1+0 records out 00:09:00.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471294 s, 8.7 MB/s 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.178 20:27:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:00.178 20:27:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.178 20:27:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:00.178 20:27:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:00.437 /dev/nbd1 00:09:00.437 20:27:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:00.437 20:27:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:00.437 1+0 records in 00:09:00.437 1+0 records out 00:09:00.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427292 s, 9.6 MB/s 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.437 20:27:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:00.437 20:27:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.437 
20:27:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:00.437 20:27:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:00.437 20:27:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.437 20:27:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:00.696 { 00:09:00.696 "nbd_device": "/dev/nbd0", 00:09:00.696 "bdev_name": "Malloc0" 00:09:00.696 }, 00:09:00.696 { 00:09:00.696 "nbd_device": "/dev/nbd1", 00:09:00.696 "bdev_name": "Malloc1" 00:09:00.696 } 00:09:00.696 ]' 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:00.696 { 00:09:00.696 "nbd_device": "/dev/nbd0", 00:09:00.696 "bdev_name": "Malloc0" 00:09:00.696 }, 00:09:00.696 { 00:09:00.696 "nbd_device": "/dev/nbd1", 00:09:00.696 "bdev_name": "Malloc1" 00:09:00.696 } 00:09:00.696 ]' 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:00.696 /dev/nbd1' 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:00.696 /dev/nbd1' 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:00.696 256+0 records in 00:09:00.696 256+0 records out 00:09:00.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118822 s, 88.2 MB/s 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.696 20:27:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:00.955 256+0 records in 00:09:00.955 256+0 records out 00:09:00.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0350879 s, 29.9 MB/s 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:00.955 256+0 records in 00:09:00.955 256+0 records out 00:09:00.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0334401 s, 31.4 MB/s 00:09:00.955 20:27:08 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.955 20:27:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.212 20:27:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:01.471 20:27:09 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.471 20:27:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:01.730 20:27:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:01.730 20:27:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:02.319 20:27:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:03.254 [2024-11-25 20:27:11.329090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:03.512 [2024-11-25 20:27:11.447658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.512 [2024-11-25 20:27:11.447657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.512 [2024-11-25 20:27:11.635403] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:03.512 [2024-11-25 20:27:11.635485] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:05.447 spdk_app_start Round 1 00:09:05.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:05.447 20:27:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:05.447 20:27:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:05.447 20:27:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59459 /var/tmp/spdk-nbd.sock 00:09:05.447 20:27:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59459 ']' 00:09:05.447 20:27:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:05.447 20:27:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.447 20:27:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
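Round 0 above exercised the waitfornbd helper once per device (autotest_common.sh@872-893 in the trace): poll /proc/partitions until the kernel publishes the device name, then perform one O_DIRECT read to prove the NBD connection actually serves I/O. A condensed sketch of that traced pattern, with the scratch file moved to /tmp for illustration:

waitfornbd() {
    local nbd_name=$1 i size
    # Wait up to ~2s for the kernel to publish the partition entry.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Read one 4 KiB block back through O_DIRECT, as in the trace above.
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [[ $size != 0 ]]   # a zero-byte read would mean a dead device
}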
00:09:05.447 20:27:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.447 20:27:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:05.447 20:27:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.447 20:27:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:05.447 20:27:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:05.707 Malloc0 00:09:05.707 20:27:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:05.966 Malloc1 00:09:05.966 20:27:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.966 20:27:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:06.225 /dev/nbd0 00:09:06.225 20:27:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:06.225 20:27:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:06.225 20:27:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:06.225 1+0 records in 00:09:06.225 1+0 records out 
00:09:06.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395094 s, 10.4 MB/s 00:09:06.226 20:27:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:06.226 20:27:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:06.226 20:27:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:06.226 20:27:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:06.226 20:27:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:06.226 20:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.226 20:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:06.226 20:27:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:06.484 /dev/nbd1 00:09:06.484 20:27:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:06.484 20:27:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:06.484 1+0 records in 00:09:06.484 1+0 records out 00:09:06.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730637 s, 5.6 MB/s 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:06.484 20:27:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:06.484 20:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.484 20:27:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:06.484 20:27:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:06.484 20:27:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.484 20:27:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:06.742 { 00:09:06.742 "nbd_device": "/dev/nbd0", 00:09:06.742 "bdev_name": "Malloc0" 00:09:06.742 }, 00:09:06.742 { 00:09:06.742 "nbd_device": "/dev/nbd1", 00:09:06.742 "bdev_name": "Malloc1" 00:09:06.742 } 
00:09:06.742 ]' 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:06.742 { 00:09:06.742 "nbd_device": "/dev/nbd0", 00:09:06.742 "bdev_name": "Malloc0" 00:09:06.742 }, 00:09:06.742 { 00:09:06.742 "nbd_device": "/dev/nbd1", 00:09:06.742 "bdev_name": "Malloc1" 00:09:06.742 } 00:09:06.742 ]' 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:06.742 /dev/nbd1' 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:06.742 /dev/nbd1' 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:06.742 20:27:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:07.000 256+0 records in 00:09:07.000 256+0 records out 00:09:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130397 s, 80.4 MB/s 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:07.000 256+0 records in 00:09:07.000 256+0 records out 00:09:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308376 s, 34.0 MB/s 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:07.000 256+0 records in 00:09:07.000 256+0 records out 00:09:07.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332188 s, 31.6 MB/s 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:07.000 20:27:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.000 20:27:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.258 20:27:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:07.515 20:27:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:07.515 20:27:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:07.515 20:27:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:07.515 20:27:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.515 20:27:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.515 20:27:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:07.516 20:27:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:07.516 20:27:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.516 20:27:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:07.516 20:27:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.516 20:27:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.774 20:27:15 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:07.774 20:27:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:07.774 20:27:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:08.357 20:27:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:09.292 [2024-11-25 20:27:17.353870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.550 [2024-11-25 20:27:17.472047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.550 [2024-11-25 20:27:17.472069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.550 [2024-11-25 20:27:17.672022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:09.550 [2024-11-25 20:27:17.672086] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:11.453 spdk_app_start Round 2 00:09:11.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:11.453 20:27:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:11.453 20:27:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:11.453 20:27:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59459 /var/tmp/spdk-nbd.sock 00:09:11.453 20:27:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59459 ']' 00:09:11.453 20:27:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:11.453 20:27:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.453 20:27:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
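Each round's setup, traced above, reduces to the same RPC sequence against the app socket: create a 64 MiB malloc bdev with a 4096-byte block size, export it as a kernel NBD device, and later enumerate and tear it down. The calls below are the ones xtraced in this log, with the absolute rpc.py path shortened; the jq extraction mirrors nbd_common.sh@64:

sock=/var/tmp/spdk-nbd.sock
bdev=$(scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096)          # prints e.g. Malloc0
scripts/rpc.py -s "$sock" nbd_start_disk "$bdev" /dev/nbd0
scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'   # -> /dev/nbd0
scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0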
00:09:11.453 20:27:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.453 20:27:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:11.453 20:27:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.453 20:27:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:11.453 20:27:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.712 Malloc0 00:09:11.712 20:27:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.971 Malloc1 00:09:11.971 20:27:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:11.971 20:27:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.971 20:27:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:11.971 20:27:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:11.971 20:27:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.971 20:27:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.972 20:27:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:12.231 /dev/nbd0 00:09:12.231 20:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:12.231 20:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:12.231 1+0 records in 00:09:12.231 1+0 records out 
00:09:12.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224594 s, 18.2 MB/s 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:12.231 20:27:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:12.231 20:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.231 20:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:12.231 20:27:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:12.491 /dev/nbd1 00:09:12.491 20:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:12.491 20:27:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:12.491 1+0 records in 00:09:12.491 1+0 records out 00:09:12.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365108 s, 11.2 MB/s 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:12.491 20:27:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:12.491 20:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.491 20:27:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:12.491 20:27:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.491 20:27:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.491 20:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.751 20:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:12.751 { 00:09:12.751 "nbd_device": "/dev/nbd0", 00:09:12.751 "bdev_name": "Malloc0" 00:09:12.751 }, 00:09:12.751 { 00:09:12.751 "nbd_device": "/dev/nbd1", 00:09:12.751 "bdev_name": "Malloc1" 00:09:12.751 } 
00:09:12.751 ]' 00:09:12.751 20:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.751 20:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:12.751 { 00:09:12.751 "nbd_device": "/dev/nbd0", 00:09:12.751 "bdev_name": "Malloc0" 00:09:12.751 }, 00:09:12.751 { 00:09:12.751 "nbd_device": "/dev/nbd1", 00:09:12.751 "bdev_name": "Malloc1" 00:09:12.751 } 00:09:12.751 ]' 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:13.009 /dev/nbd1' 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:13.009 /dev/nbd1' 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:13.009 256+0 records in 00:09:13.009 256+0 records out 00:09:13.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115202 s, 91.0 MB/s 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:13.009 256+0 records in 00:09:13.009 256+0 records out 00:09:13.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289876 s, 36.2 MB/s 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:13.009 256+0 records in 00:09:13.009 256+0 records out 00:09:13.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0362132 s, 29.0 MB/s 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:13.009 20:27:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:13.009 20:27:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.009 20:27:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:13.268 20:27:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.527 20:27:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.785 20:27:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:13.785 20:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:13.785 20:27:21 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:14.043 20:27:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:14.043 20:27:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:14.301 20:27:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:15.677 [2024-11-25 20:27:23.616216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.677 [2024-11-25 20:27:23.741307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.677 [2024-11-25 20:27:23.741308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.936 [2024-11-25 20:27:23.954558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:15.936 [2024-11-25 20:27:23.954667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:17.311 20:27:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59459 /var/tmp/spdk-nbd.sock 00:09:17.311 20:27:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59459 ']' 00:09:17.311 20:27:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:17.311 20:27:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:17.311 20:27:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
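The write/verify pass each round runs above (nbd_dd_data_verify in the trace) fills a 1 MiB scratch file from /dev/urandom, writes it to every exported NBD device with O_DIRECT, then byte-compares each device against the file. Condensed from the traced commands, with the scratch path shortened for illustration:

tmp=/tmp/nbdrandtest   # the trace uses spdk/test/event/nbdrandtest
dd if=/dev/urandom of=$tmp bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write pass
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $tmp $nbd   # verify pass: -b prints any differing bytes
done
rm $tmp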
00:09:17.311 20:27:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.311 20:27:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:17.572 20:27:25 event.app_repeat -- event/event.sh@39 -- # killprocess 59459 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59459 ']' 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59459 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59459 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.572 killing process with pid 59459 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59459' 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59459 00:09:17.572 20:27:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59459 00:09:18.949 spdk_app_start is called in Round 0. 00:09:18.949 Shutdown signal received, stop current app iteration 00:09:18.949 Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 reinitialization... 00:09:18.949 spdk_app_start is called in Round 1. 00:09:18.949 Shutdown signal received, stop current app iteration 00:09:18.949 Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 reinitialization... 00:09:18.949 spdk_app_start is called in Round 2. 00:09:18.949 Shutdown signal received, stop current app iteration 00:09:18.949 Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 reinitialization... 00:09:18.949 spdk_app_start is called in Round 3. 00:09:18.949 Shutdown signal received, stop current app iteration 00:09:18.949 20:27:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:18.949 20:27:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:18.949 00:09:18.949 real 0m20.563s 00:09:18.949 user 0m44.114s 00:09:18.949 sys 0m3.510s 00:09:18.949 20:27:26 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.949 ************************************ 00:09:18.949 END TEST app_repeat 00:09:18.949 20:27:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:18.949 ************************************ 00:09:18.949 20:27:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:18.949 20:27:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:18.949 20:27:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.949 20:27:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.949 20:27:26 event -- common/autotest_common.sh@10 -- # set +x 00:09:18.949 ************************************ 00:09:18.949 START TEST cpu_locks 00:09:18.949 ************************************ 00:09:18.949 20:27:26 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:18.949 * Looking for test storage... 
00:09:18.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:18.949 20:27:26 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.949 20:27:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.949 20:27:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.949 20:27:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.949 20:27:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:18.949 20:27:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.949 20:27:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.949 --rc genhtml_branch_coverage=1 00:09:18.949 --rc genhtml_function_coverage=1 00:09:18.949 --rc genhtml_legend=1 00:09:18.949 --rc geninfo_all_blocks=1 00:09:18.949 --rc geninfo_unexecuted_blocks=1 00:09:18.949 00:09:18.949 ' 00:09:18.949 20:27:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.949 --rc genhtml_branch_coverage=1 00:09:18.949 --rc genhtml_function_coverage=1 
00:09:18.949 --rc genhtml_legend=1 00:09:18.949 --rc geninfo_all_blocks=1 00:09:18.949 --rc geninfo_unexecuted_blocks=1 00:09:18.949 00:09:18.949 ' 00:09:18.949 20:27:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.949 --rc genhtml_branch_coverage=1 00:09:18.949 --rc genhtml_function_coverage=1 00:09:18.949 --rc genhtml_legend=1 00:09:18.949 --rc geninfo_all_blocks=1 00:09:18.949 --rc geninfo_unexecuted_blocks=1 00:09:18.949 00:09:18.949 ' 00:09:18.949 20:27:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.949 --rc genhtml_branch_coverage=1 00:09:18.949 --rc genhtml_function_coverage=1 00:09:18.950 --rc genhtml_legend=1 00:09:18.950 --rc geninfo_all_blocks=1 00:09:18.950 --rc geninfo_unexecuted_blocks=1 00:09:18.950 00:09:18.950 ' 00:09:18.950 20:27:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:18.950 20:27:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:18.950 20:27:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:18.950 20:27:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:18.950 20:27:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.950 20:27:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.950 20:27:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.950 ************************************ 00:09:18.950 START TEST default_locks 00:09:18.950 ************************************ 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59917 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59917 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59917 ']' 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.950 20:27:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.209 [2024-11-25 20:27:27.154729] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
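The lt/cmp_versions trace above amounts to a field-by-field numeric compare of version strings. A minimal bash sketch of that logic, assuming purely numeric components (the real scripts/common.sh helper also validates each field through decimal() and dispatches on the operator with a case statement):

    # Split version strings on ".", "-" or ":" and compare component by component.
    cmp_versions() {
        local IFS=.-: op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
            if ((d1 > d2)); then [[ $op == '>' ]]; return; fi
            if ((d1 < d2)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    # lt 1.15 2 && echo "lcov is pre-2.x: pass the --rc branch/function coverage options"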
00:09:19.209 [2024-11-25 20:27:27.154862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59917 ] 00:09:19.468 [2024-11-25 20:27:27.344506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.468 [2024-11-25 20:27:27.466618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.405 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.405 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:20.405 20:27:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59917 00:09:20.405 20:27:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59917 00:09:20.405 20:27:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:20.971 20:27:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59917 00:09:20.971 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59917 ']' 00:09:20.971 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59917 00:09:20.971 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:20.971 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.971 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59917 00:09:20.972 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.972 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.972 killing process with pid 59917 00:09:20.972 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59917' 00:09:20.972 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59917 00:09:20.972 20:27:28 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59917 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59917 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59917 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59917 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59917 ']' 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.504 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.504 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.505 ERROR: process (pid: 59917) is no longer running 00:09:23.505 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59917) - No such process 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:23.505 00:09:23.505 real 0m4.372s 00:09:23.505 user 0m4.388s 00:09:23.505 sys 0m0.720s 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.505 ************************************ 00:09:23.505 END TEST default_locks 00:09:23.505 ************************************ 00:09:23.505 20:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.505 20:27:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:23.505 20:27:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.505 20:27:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.505 20:27:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.505 ************************************ 00:09:23.505 START TEST default_locks_via_rpc 00:09:23.505 ************************************ 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59997 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59997 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59997 ']' 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.505 20:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.505 [2024-11-25 20:27:31.630223] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:09:23.505 [2024-11-25 20:27:31.630398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59997 ] 00:09:23.764 [2024-11-25 20:27:31.819563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.023 [2024-11-25 20:27:31.944217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59997 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59997 00:09:24.964 20:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59997 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59997 ']' 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59997 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59997 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.224 killing process with pid 59997 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59997' 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59997 00:09:25.224 20:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59997 00:09:27.756 00:09:27.756 real 0m4.324s 00:09:27.756 user 0m4.231s 00:09:27.756 sys 0m0.730s 00:09:27.756 ************************************ 00:09:27.756 END TEST default_locks_via_rpc 00:09:27.756 ************************************ 00:09:27.756 20:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.756 20:27:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.756 20:27:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:27.756 20:27:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.756 20:27:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.756 20:27:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:27.756 ************************************ 00:09:27.756 START TEST non_locking_app_on_locked_coremask 00:09:27.756 ************************************ 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60071 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60071 /var/tmp/spdk.sock 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60071 ']' 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:27.756 20:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:28.022 [2024-11-25 20:27:35.991831] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
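The default_locks_via_rpc case that just closed exercises the same per-core lock files, but toggled at runtime over the RPC socket rather than at process start. A rough manual equivalent against a running target, using SPDK's scripts/rpc.py client (method names exactly as traced above):

    scripts/rpc.py framework_disable_cpumask_locks   # release the per-core lock files while running
    scripts/rpc.py framework_enable_cpumask_locks    # re-claim them; lslocks then shows spdk_cpu_lock again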
00:09:28.022 [2024-11-25 20:27:35.991970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60071 ] 00:09:28.304 [2024-11-25 20:27:36.181370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.304 [2024-11-25 20:27:36.307447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60093 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60093 /var/tmp/spdk2.sock 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60093 ']' 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:29.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.237 20:27:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:29.237 [2024-11-25 20:27:37.350720] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:09:29.237 [2024-11-25 20:27:37.350900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60093 ] 00:09:29.495 [2024-11-25 20:27:37.543144] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
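At this point the non_locking_app_on_locked_coremask case has two targets alive on core 0: the first holds the core lock, the second was started with --disable-cpumask-locks and never tries to take it. A hedged sketch of the same setup outside the harness (binary and socket paths as in the trace; pid1 is just the captured PID of the first target):

    build/bin/spdk_tgt -m 0x1 & pid1=$!
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
    # the first target still owns the core-0 lock:
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "core 0 locked by $pid1"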
00:09:29.495 [2024-11-25 20:27:37.543239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.752 [2024-11-25 20:27:37.784081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.278 20:27:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.278 20:27:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:32.278 20:27:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60071 00:09:32.278 20:27:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60071 00:09:32.278 20:27:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:32.846 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60071 00:09:32.846 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60071 ']' 00:09:32.846 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60071 00:09:32.846 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:32.846 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.846 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60071 00:09:33.105 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.105 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.105 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60071' 00:09:33.105 killing process with pid 60071 00:09:33.105 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60071 00:09:33.105 20:27:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60071 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60093 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60093 ']' 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60093 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60093 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.379 killing process with pid 60093 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60093' 00:09:38.379 20:27:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60093 00:09:38.379 20:27:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60093 00:09:40.278 00:09:40.278 real 0m12.452s 00:09:40.278 user 0m12.826s 00:09:40.278 sys 0m1.488s 00:09:40.278 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.278 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.278 ************************************ 00:09:40.278 END TEST non_locking_app_on_locked_coremask 00:09:40.278 ************************************ 00:09:40.278 20:27:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:40.278 20:27:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.278 20:27:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.278 20:27:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:40.278 ************************************ 00:09:40.278 START TEST locking_app_on_unlocked_coremask 00:09:40.278 ************************************ 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60251 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60251 /var/tmp/spdk.sock 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60251 ']' 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.278 20:27:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:40.534 [2024-11-25 20:27:48.496550] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:09:40.534 [2024-11-25 20:27:48.496732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60251 ] 00:09:40.789 [2024-11-25 20:27:48.687529] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:40.789 [2024-11-25 20:27:48.687615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.789 [2024-11-25 20:27:48.815507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60268 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60268 /var/tmp/spdk2.sock 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60268 ']' 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.720 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:41.977 [2024-11-25 20:27:49.906115] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:09:41.977 [2024-11-25 20:27:49.906303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60268 ] 00:09:41.977 [2024-11-25 20:27:50.102840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.236 [2024-11-25 20:27:50.358118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.774 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.774 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:44.774 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60268 00:09:44.774 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60268 00:09:44.775 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60251 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60251 ']' 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60251 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60251 00:09:45.343 killing process with pid 60251 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60251' 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60251 00:09:45.343 20:27:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60251 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60268 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60268 ']' 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60268 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60268 00:09:50.615 killing process with pid 60268 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.615 20:27:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60268' 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60268 00:09:50.615 20:27:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60268 00:09:52.517 00:09:52.517 real 0m12.169s 00:09:52.517 user 0m12.576s 00:09:52.517 sys 0m1.416s 00:09:52.517 ************************************ 00:09:52.517 END TEST locking_app_on_unlocked_coremask 00:09:52.517 ************************************ 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:52.517 20:28:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:52.517 20:28:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.517 20:28:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.517 20:28:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:52.517 ************************************ 00:09:52.517 START TEST locking_app_on_locked_coremask 00:09:52.517 ************************************ 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60417 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60417 /var/tmp/spdk.sock 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60417 ']' 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.517 20:28:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:52.776 [2024-11-25 20:28:00.747160] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
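The locking_app_on_locked_coremask case starting here is the negative test: with locks left enabled, a second spdk_tgt aimed at the already-claimed core 0 must refuse to come up. As the trace below shows, the refusal comes from app.c itself rather than from the harness:

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # same mask as the running target, locks enabled
    # app.c: claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process <pid> has claimed it.
    # app.c: spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.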
00:09:52.776 [2024-11-25 20:28:00.747522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60417 ] 00:09:53.034 [2024-11-25 20:28:00.935130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.034 [2024-11-25 20:28:01.053105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60439 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60439 /var/tmp/spdk2.sock 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60439 /var/tmp/spdk2.sock 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60439 /var/tmp/spdk2.sock 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60439 ']' 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.970 20:28:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:53.970 [2024-11-25 20:28:02.040552] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:09:53.970 [2024-11-25 20:28:02.040708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60439 ] 00:09:54.228 [2024-11-25 20:28:02.239142] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60417 has claimed it. 00:09:54.228 [2024-11-25 20:28:02.239212] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:54.794 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60439) - No such process 00:09:54.794 ERROR: process (pid: 60439) is no longer running 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60417 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60417 00:09:54.794 20:28:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:55.053 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60417 00:09:55.053 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60417 ']' 00:09:55.053 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60417 00:09:55.053 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:55.053 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.053 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60417 00:09:55.312 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.312 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.312 killing process with pid 60417 00:09:55.312 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60417' 00:09:55.312 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60417 00:09:55.312 20:28:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60417 00:09:57.923 00:09:57.923 real 0m4.983s 00:09:57.923 user 0m5.154s 00:09:57.923 sys 0m0.917s 00:09:57.923 20:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.923 ************************************ 00:09:57.923 END 
TEST locking_app_on_locked_coremask 00:09:57.923 ************************************ 00:09:57.923 20:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:57.923 20:28:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:57.923 20:28:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.923 20:28:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.923 20:28:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.923 ************************************ 00:09:57.923 START TEST locking_overlapped_coremask 00:09:57.923 ************************************ 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60508 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60508 /var/tmp/spdk.sock 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60508 ']' 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.923 20:28:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:57.923 [2024-11-25 20:28:05.794888] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:09:57.923 [2024-11-25 20:28:05.795600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60508 ] 00:09:57.923 [2024-11-25 20:28:05.973746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:58.182 [2024-11-25 20:28:06.097031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.182 [2024-11-25 20:28:06.097143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.182 [2024-11-25 20:28:06.097174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60532 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60532 /var/tmp/spdk2.sock 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60532 /var/tmp/spdk2.sock 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60532 /var/tmp/spdk2.sock 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60532 ']' 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:59.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.118 20:28:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:59.118 [2024-11-25 20:28:07.118532] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
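The two core masks used by the overlapped tests intersect on exactly one core, which is the collision being probed:

    # -m 0x7  = 0b00111 -> cores 0,1,2   (first target, lock files held)
    # -m 0x1c = 0b11100 -> cores 2,3,4   (second target)
    # overlap: core 2, so the second target's claim on core 2 is expected to fail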
00:09:59.118 [2024-11-25 20:28:07.118664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60532 ] 00:09:59.376 [2024-11-25 20:28:07.306190] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60508 has claimed it. 00:09:59.376 [2024-11-25 20:28:07.306267] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:59.636 ERROR: process (pid: 60532) is no longer running 00:09:59.636 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60532) - No such process 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60508 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60508 ']' 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60508 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.636 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60508 00:09:59.894 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.894 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.894 killing process with pid 60508 00:09:59.894 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60508' 00:09:59.894 20:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60508 00:09:59.894 20:28:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60508 00:10:02.426 00:10:02.426 real 0m4.565s 00:10:02.426 user 0m12.367s 00:10:02.426 sys 0m0.678s 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:02.426 ************************************ 00:10:02.426 END TEST locking_overlapped_coremask 00:10:02.426 ************************************ 00:10:02.426 20:28:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:02.426 20:28:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.426 20:28:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.426 20:28:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:02.426 ************************************ 00:10:02.426 START TEST locking_overlapped_coremask_via_rpc 00:10:02.426 ************************************ 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60596 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60596 /var/tmp/spdk.sock 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60596 ']' 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.426 20:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.426 [2024-11-25 20:28:10.438180] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:10:02.426 [2024-11-25 20:28:10.438315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60596 ] 00:10:02.685 [2024-11-25 20:28:10.615310] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:02.685 [2024-11-25 20:28:10.615398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.685 [2024-11-25 20:28:10.735695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.685 [2024-11-25 20:28:10.735868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.685 [2024-11-25 20:28:10.735898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60614 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60614 /var/tmp/spdk2.sock 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60614 ']' 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.623 20:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.882 [2024-11-25 20:28:11.778635] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:10:03.882 [2024-11-25 20:28:11.778768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60614 ] 00:10:03.882 [2024-11-25 20:28:11.965421] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:03.882 [2024-11-25 20:28:11.965487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:04.141 [2024-11-25 20:28:12.224257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.141 [2024-11-25 20:28:12.224416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.141 [2024-11-25 20:28:12.224448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.827 [2024-11-25 20:28:14.529561] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60596 has claimed it. 
00:10:06.827 request: 00:10:06.827 { 00:10:06.827 "method": "framework_enable_cpumask_locks", 00:10:06.827 "req_id": 1 00:10:06.827 } 00:10:06.827 Got JSON-RPC error response 00:10:06.827 response: 00:10:06.827 { 00:10:06.827 "code": -32603, 00:10:06.827 "message": "Failed to claim CPU core: 2" 00:10:06.827 } 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60596 /var/tmp/spdk.sock 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60596 ']' 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60614 /var/tmp/spdk2.sock 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60614 ']' 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.827 20:28:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.087 20:28:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.087 20:28:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:07.087 20:28:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:07.087 20:28:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:07.087 20:28:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:07.087 20:28:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:07.087 00:10:07.087 real 0m4.696s 00:10:07.087 user 0m1.499s 00:10:07.087 sys 0m0.260s 00:10:07.087 20:28:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.087 20:28:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.087 ************************************ 00:10:07.087 END TEST locking_overlapped_coremask_via_rpc 00:10:07.087 ************************************ 00:10:07.087 20:28:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:07.087 20:28:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60596 ]] 00:10:07.087 20:28:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60596 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60596 ']' 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60596 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60596 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60596' 00:10:07.087 killing process with pid 60596 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60596 00:10:07.087 20:28:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60596 00:10:09.624 20:28:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60614 ]] 00:10:09.624 20:28:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60614 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60614 ']' 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60614 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.624 
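Stripped of the harness plumbing, the scenario just exercised reduces to a handful of commands. A minimal sketch, assuming a built SPDK tree and the same sockets and masks this job uses:

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  scripts/rpc.py framework_enable_cpumask_locks                          # first target claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # rejected with -32603: core 2 already claimed
  ls /var/tmp/spdk_cpu_lock_*                                            # one lock file per claimed core

The files checked by check_remaining_locks are exactly the ones created by the successful claim, which is why the expected set is spdk_cpu_lock_{000..002}.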
20:28:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60614 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:09.624 killing process with pid 60614 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60614' 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60614 00:10:09.624 20:28:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60614 00:10:12.909 20:28:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:12.909 20:28:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:12.909 20:28:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60596 ]] 00:10:12.909 20:28:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60596 00:10:12.909 20:28:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60596 ']' 00:10:12.909 20:28:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60596 00:10:12.909 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60596) - No such process 00:10:12.909 Process with pid 60596 is not found 00:10:12.909 20:28:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60596 is not found' 00:10:12.909 20:28:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60614 ]] 00:10:12.909 20:28:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60614 00:10:12.910 20:28:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60614 ']' 00:10:12.910 20:28:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60614 00:10:12.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60614) - No such process 00:10:12.910 Process with pid 60614 is not found 00:10:12.910 20:28:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60614 is not found' 00:10:12.910 20:28:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:12.910 00:10:12.910 real 0m53.676s 00:10:12.910 user 1m31.717s 00:10:12.910 sys 0m7.555s 00:10:12.910 20:28:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.910 20:28:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 ************************************ 00:10:12.910 END TEST cpu_locks 00:10:12.910 ************************************ 00:10:12.910 00:10:12.910 real 1m26.730s 00:10:12.910 user 2m37.777s 00:10:12.910 sys 0m12.431s 00:10:12.910 20:28:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.910 20:28:20 event -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 ************************************ 00:10:12.910 END TEST event 00:10:12.910 ************************************ 00:10:12.910 20:28:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:12.910 20:28:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.910 20:28:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.910 20:28:20 -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 ************************************ 00:10:12.910 START TEST thread 00:10:12.910 ************************************ 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:12.910 * Looking for test storage... 
00:10:12.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.910 20:28:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.910 20:28:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.910 20:28:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.910 20:28:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.910 20:28:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.910 20:28:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.910 20:28:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.910 20:28:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.910 20:28:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.910 20:28:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.910 20:28:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.910 20:28:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:12.910 20:28:20 thread -- scripts/common.sh@345 -- # : 1 00:10:12.910 20:28:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.910 20:28:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.910 20:28:20 thread -- scripts/common.sh@365 -- # decimal 1 00:10:12.910 20:28:20 thread -- scripts/common.sh@353 -- # local d=1 00:10:12.910 20:28:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.910 20:28:20 thread -- scripts/common.sh@355 -- # echo 1 00:10:12.910 20:28:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.910 20:28:20 thread -- scripts/common.sh@366 -- # decimal 2 00:10:12.910 20:28:20 thread -- scripts/common.sh@353 -- # local d=2 00:10:12.910 20:28:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.910 20:28:20 thread -- scripts/common.sh@355 -- # echo 2 00:10:12.910 20:28:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.910 20:28:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.910 20:28:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.910 20:28:20 thread -- scripts/common.sh@368 -- # return 0 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.910 --rc genhtml_branch_coverage=1 00:10:12.910 --rc genhtml_function_coverage=1 00:10:12.910 --rc genhtml_legend=1 00:10:12.910 --rc geninfo_all_blocks=1 00:10:12.910 --rc geninfo_unexecuted_blocks=1 00:10:12.910 00:10:12.910 ' 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.910 --rc genhtml_branch_coverage=1 00:10:12.910 --rc genhtml_function_coverage=1 00:10:12.910 --rc genhtml_legend=1 00:10:12.910 --rc geninfo_all_blocks=1 00:10:12.910 --rc geninfo_unexecuted_blocks=1 00:10:12.910 00:10:12.910 ' 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:12.910 --rc genhtml_branch_coverage=1 00:10:12.910 --rc genhtml_function_coverage=1 00:10:12.910 --rc genhtml_legend=1 00:10:12.910 --rc geninfo_all_blocks=1 00:10:12.910 --rc geninfo_unexecuted_blocks=1 00:10:12.910 00:10:12.910 ' 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.910 --rc genhtml_branch_coverage=1 00:10:12.910 --rc genhtml_function_coverage=1 00:10:12.910 --rc genhtml_legend=1 00:10:12.910 --rc geninfo_all_blocks=1 00:10:12.910 --rc geninfo_unexecuted_blocks=1 00:10:12.910 00:10:12.910 ' 00:10:12.910 20:28:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.910 20:28:20 thread -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 ************************************ 00:10:12.910 START TEST thread_poller_perf 00:10:12.910 ************************************ 00:10:12.910 20:28:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:12.910 [2024-11-25 20:28:20.892793] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:10:12.910 [2024-11-25 20:28:20.892916] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60820 ] 00:10:13.169 [2024-11-25 20:28:21.066198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.169 Running 1000 pollers for 1 seconds with 1 microseconds period. 
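The invocation maps one-to-one onto the banner it prints; spelled out (flag meanings read off that banner, not additional options):

  poller_perf -b 1000 -l 1 -t 1   # register 1000 pollers, 1 us period each, measure for 1 second

The table that follows reports the aggregate busy TSC cycles and how many poller executions they bought.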
00:10:13.169 [2024-11-25 20:28:21.210222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.547 [2024-11-25T20:28:22.683Z] ====================================== 00:10:14.547 [2024-11-25T20:28:22.683Z] busy:2499067864 (cyc) 00:10:14.547 [2024-11-25T20:28:22.683Z] total_run_count: 353000 00:10:14.547 [2024-11-25T20:28:22.683Z] tsc_hz: 2490000000 (cyc) 00:10:14.547 [2024-11-25T20:28:22.683Z] ====================================== 00:10:14.547 [2024-11-25T20:28:22.683Z] poller_cost: 7079 (cyc), 2842 (nsec) 00:10:14.547 00:10:14.547 real 0m1.619s 00:10:14.547 user 0m1.383s 00:10:14.547 sys 0m0.120s 00:10:14.547 20:28:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.547 20:28:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:14.547 ************************************ 00:10:14.547 END TEST thread_poller_perf 00:10:14.547 ************************************ 00:10:14.547 20:28:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:14.547 20:28:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:14.547 20:28:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.547 20:28:22 thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.547 ************************************ 00:10:14.548 START TEST thread_poller_perf 00:10:14.548 ************************************ 00:10:14.548 20:28:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:14.548 [2024-11-25 20:28:22.597883] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:10:14.548 [2024-11-25 20:28:22.598005] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60862 ] 00:10:14.806 [2024-11-25 20:28:22.781877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.806 Running 1000 pollers for 1 seconds with 0 microseconds period. 
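poller_cost in the table above is simply busy cycles divided by executions, converted to wall time with the reported TSC rate; for this run:

  poller_cost = 2499067864 cyc / 353000 runs ≈ 7079 cyc per execution
  7079 cyc / 2.49 cyc/nsec ≈ 2842 nsec                  # tsc_hz = 2490000000

Roughly 7k cycles per execution of a 1 us timed poller is largely timer bookkeeping; the zero-period run below (-l 0, plain active pollers) isolates the bare dispatch cost, and the gap between the two poller_cost values is the timer-management overhead.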
00:10:14.806 [2024-11-25 20:28:22.934122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.185 [2024-11-25T20:28:24.321Z] ====================================== 00:10:16.185 [2024-11-25T20:28:24.321Z] busy:2494034084 (cyc) 00:10:16.185 [2024-11-25T20:28:24.321Z] total_run_count: 4993000 00:10:16.185 [2024-11-25T20:28:24.321Z] tsc_hz: 2490000000 (cyc) 00:10:16.185 [2024-11-25T20:28:24.321Z] ====================================== 00:10:16.185 [2024-11-25T20:28:24.321Z] poller_cost: 499 (cyc), 200 (nsec) 00:10:16.185 00:10:16.185 real 0m1.616s 00:10:16.185 user 0m1.383s 00:10:16.185 sys 0m0.125s 00:10:16.185 20:28:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.185 20:28:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:16.185 ************************************ 00:10:16.185 END TEST thread_poller_perf 00:10:16.185 ************************************ 00:10:16.185 20:28:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:16.185 00:10:16.185 real 0m3.612s 00:10:16.185 user 0m2.935s 00:10:16.185 sys 0m0.461s 00:10:16.185 20:28:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.185 20:28:24 thread -- common/autotest_common.sh@10 -- # set +x 00:10:16.185 ************************************ 00:10:16.185 END TEST thread 00:10:16.185 ************************************ 00:10:16.185 20:28:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:16.185 20:28:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:16.185 20:28:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.185 20:28:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.185 20:28:24 -- common/autotest_common.sh@10 -- # set +x 00:10:16.185 ************************************ 00:10:16.185 START TEST app_cmdline 00:10:16.185 ************************************ 00:10:16.185 20:28:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:16.445 * Looking for test storage... 
00:10:16.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.445 20:28:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:16.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.445 --rc genhtml_branch_coverage=1 00:10:16.445 --rc genhtml_function_coverage=1 00:10:16.445 --rc genhtml_legend=1 00:10:16.445 --rc geninfo_all_blocks=1 00:10:16.445 --rc geninfo_unexecuted_blocks=1 00:10:16.445 00:10:16.445 ' 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:16.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.445 --rc genhtml_branch_coverage=1 00:10:16.445 --rc genhtml_function_coverage=1 00:10:16.445 --rc genhtml_legend=1 00:10:16.445 --rc geninfo_all_blocks=1 00:10:16.445 --rc geninfo_unexecuted_blocks=1 00:10:16.445 
00:10:16.445 ' 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:16.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.445 --rc genhtml_branch_coverage=1 00:10:16.445 --rc genhtml_function_coverage=1 00:10:16.445 --rc genhtml_legend=1 00:10:16.445 --rc geninfo_all_blocks=1 00:10:16.445 --rc geninfo_unexecuted_blocks=1 00:10:16.445 00:10:16.445 ' 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:16.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.445 --rc genhtml_branch_coverage=1 00:10:16.445 --rc genhtml_function_coverage=1 00:10:16.445 --rc genhtml_legend=1 00:10:16.445 --rc geninfo_all_blocks=1 00:10:16.445 --rc geninfo_unexecuted_blocks=1 00:10:16.445 00:10:16.445 ' 00:10:16.445 20:28:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:16.445 20:28:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60946 00:10:16.445 20:28:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:16.445 20:28:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60946 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60946 ']' 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.445 20:28:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:16.704 [2024-11-25 20:28:24.642265] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:10:16.704 [2024-11-25 20:28:24.642607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60946 ] 00:10:16.705 [2024-11-25 20:28:24.831523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.964 [2024-11-25 20:28:24.955230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.900 20:28:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.900 20:28:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:17.900 20:28:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:18.160 { 00:10:18.160 "version": "SPDK v25.01-pre git sha1 d8f6e798d", 00:10:18.160 "fields": { 00:10:18.160 "major": 25, 00:10:18.160 "minor": 1, 00:10:18.160 "patch": 0, 00:10:18.160 "suffix": "-pre", 00:10:18.160 "commit": "d8f6e798d" 00:10:18.160 } 00:10:18.160 } 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:18.160 20:28:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:18.160 20:28:26 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:18.420 request: 00:10:18.420 { 00:10:18.420 "method": "env_dpdk_get_mem_stats", 00:10:18.420 "req_id": 1 00:10:18.420 } 00:10:18.420 Got JSON-RPC error response 00:10:18.420 response: 00:10:18.420 { 00:10:18.420 "code": -32601, 00:10:18.420 "message": "Method not found" 00:10:18.420 } 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:18.420 20:28:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60946 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60946 ']' 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60946 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60946 00:10:18.420 killing process with pid 60946 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60946' 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 60946 00:10:18.420 20:28:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 60946 00:10:20.970 ************************************ 00:10:20.970 END TEST app_cmdline 00:10:20.970 ************************************ 00:10:20.970 00:10:20.970 real 0m4.522s 00:10:20.970 user 0m4.661s 00:10:20.970 sys 0m0.710s 00:10:20.970 20:28:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.970 20:28:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:20.970 20:28:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:20.970 20:28:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.970 20:28:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.970 20:28:28 -- common/autotest_common.sh@10 -- # set +x 00:10:20.970 ************************************ 00:10:20.970 START TEST version 00:10:20.970 ************************************ 00:10:20.970 20:28:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:20.970 * Looking for test storage... 
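The app_cmdline test that just finished is an allowlist check: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally and every other method is rejected with JSON-RPC -32601. A minimal sketch of the same probe, assuming the paths used by this job:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py rpc_get_methods          # -> ["rpc_get_methods", "spdk_get_version"]
  scripts/rpc.py spdk_get_version         # -> {"version": "SPDK v25.01-pre git sha1 d8f6e798d", ...}
  scripts/rpc.py env_dpdk_get_mem_stats   # -> error -32601, "Method not found"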
00:10:20.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:20.970 20:28:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.970 20:28:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.970 20:28:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:21.229 20:28:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:21.230 20:28:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.230 20:28:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.230 20:28:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.230 20:28:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.230 20:28:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.230 20:28:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.230 20:28:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.230 20:28:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.230 20:28:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.230 20:28:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.230 20:28:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.230 20:28:29 version -- scripts/common.sh@344 -- # case "$op" in 00:10:21.230 20:28:29 version -- scripts/common.sh@345 -- # : 1 00:10:21.230 20:28:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.230 20:28:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.230 20:28:29 version -- scripts/common.sh@365 -- # decimal 1 00:10:21.230 20:28:29 version -- scripts/common.sh@353 -- # local d=1 00:10:21.230 20:28:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.230 20:28:29 version -- scripts/common.sh@355 -- # echo 1 00:10:21.230 20:28:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.230 20:28:29 version -- scripts/common.sh@366 -- # decimal 2 00:10:21.230 20:28:29 version -- scripts/common.sh@353 -- # local d=2 00:10:21.230 20:28:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.230 20:28:29 version -- scripts/common.sh@355 -- # echo 2 00:10:21.230 20:28:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.230 20:28:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.230 20:28:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.230 20:28:29 version -- scripts/common.sh@368 -- # return 0 00:10:21.230 20:28:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.230 20:28:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:21.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.230 --rc genhtml_branch_coverage=1 00:10:21.230 --rc genhtml_function_coverage=1 00:10:21.230 --rc genhtml_legend=1 00:10:21.230 --rc geninfo_all_blocks=1 00:10:21.230 --rc geninfo_unexecuted_blocks=1 00:10:21.230 00:10:21.230 ' 00:10:21.230 20:28:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:21.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.230 --rc genhtml_branch_coverage=1 00:10:21.230 --rc genhtml_function_coverage=1 00:10:21.230 --rc genhtml_legend=1 00:10:21.230 --rc geninfo_all_blocks=1 00:10:21.230 --rc geninfo_unexecuted_blocks=1 00:10:21.230 00:10:21.230 ' 00:10:21.230 20:28:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:21.230 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:21.230 --rc genhtml_branch_coverage=1 00:10:21.230 --rc genhtml_function_coverage=1 00:10:21.230 --rc genhtml_legend=1 00:10:21.230 --rc geninfo_all_blocks=1 00:10:21.230 --rc geninfo_unexecuted_blocks=1 00:10:21.230 00:10:21.230 ' 00:10:21.230 20:28:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:21.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.230 --rc genhtml_branch_coverage=1 00:10:21.230 --rc genhtml_function_coverage=1 00:10:21.230 --rc genhtml_legend=1 00:10:21.230 --rc geninfo_all_blocks=1 00:10:21.230 --rc geninfo_unexecuted_blocks=1 00:10:21.230 00:10:21.230 ' 00:10:21.230 20:28:29 version -- app/version.sh@17 -- # get_header_version major 00:10:21.230 20:28:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:21.230 20:28:29 version -- app/version.sh@14 -- # cut -f2 00:10:21.230 20:28:29 version -- app/version.sh@14 -- # tr -d '"' 00:10:21.230 20:28:29 version -- app/version.sh@17 -- # major=25 00:10:21.230 20:28:29 version -- app/version.sh@18 -- # get_header_version minor 00:10:21.230 20:28:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:21.230 20:28:29 version -- app/version.sh@14 -- # cut -f2 00:10:21.230 20:28:29 version -- app/version.sh@14 -- # tr -d '"' 00:10:21.230 20:28:29 version -- app/version.sh@18 -- # minor=1 00:10:21.230 20:28:29 version -- app/version.sh@19 -- # get_header_version patch 00:10:21.230 20:28:29 version -- app/version.sh@14 -- # cut -f2 00:10:21.230 20:28:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:21.230 20:28:29 version -- app/version.sh@14 -- # tr -d '"' 00:10:21.230 20:28:29 version -- app/version.sh@19 -- # patch=0 00:10:21.230 20:28:29 version -- app/version.sh@20 -- # get_header_version suffix 00:10:21.230 20:28:29 version -- app/version.sh@14 -- # cut -f2 00:10:21.230 20:28:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:21.230 20:28:29 version -- app/version.sh@14 -- # tr -d '"' 00:10:21.230 20:28:29 version -- app/version.sh@20 -- # suffix=-pre 00:10:21.230 20:28:29 version -- app/version.sh@22 -- # version=25.1 00:10:21.230 20:28:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:21.230 20:28:29 version -- app/version.sh@28 -- # version=25.1rc0 00:10:21.230 20:28:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:21.230 20:28:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:21.230 20:28:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:21.230 20:28:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:21.230 00:10:21.230 real 0m0.356s 00:10:21.230 user 0m0.215s 00:10:21.230 sys 0m0.197s 00:10:21.230 ************************************ 00:10:21.230 END TEST version 00:10:21.230 ************************************ 00:10:21.230 20:28:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.230 20:28:29 version -- common/autotest_common.sh@10 -- # set +x 00:10:21.230 20:28:29 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:21.230 20:28:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:21.230 20:28:29 -- spdk/autotest.sh@194 -- # uname -s 00:10:21.230 20:28:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:21.230 20:28:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:21.230 20:28:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:21.230 20:28:29 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:21.230 20:28:29 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:21.230 20:28:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.230 20:28:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.230 20:28:29 -- common/autotest_common.sh@10 -- # set +x 00:10:21.230 ************************************ 00:10:21.230 START TEST blockdev_nvme 00:10:21.230 ************************************ 00:10:21.230 20:28:29 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:21.489 * Looking for test storage... 00:10:21.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.489 20:28:29 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.489 --rc genhtml_branch_coverage=1 00:10:21.489 --rc genhtml_function_coverage=1 00:10:21.489 --rc genhtml_legend=1 00:10:21.489 --rc geninfo_all_blocks=1 00:10:21.489 --rc geninfo_unexecuted_blocks=1 00:10:21.489 00:10:21.489 ' 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.489 --rc genhtml_branch_coverage=1 00:10:21.489 --rc genhtml_function_coverage=1 00:10:21.489 --rc genhtml_legend=1 00:10:21.489 --rc geninfo_all_blocks=1 00:10:21.489 --rc geninfo_unexecuted_blocks=1 00:10:21.489 00:10:21.489 ' 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.489 --rc genhtml_branch_coverage=1 00:10:21.489 --rc genhtml_function_coverage=1 00:10:21.489 --rc genhtml_legend=1 00:10:21.489 --rc geninfo_all_blocks=1 00:10:21.489 --rc geninfo_unexecuted_blocks=1 00:10:21.489 00:10:21.489 ' 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.489 --rc genhtml_branch_coverage=1 00:10:21.489 --rc genhtml_function_coverage=1 00:10:21.489 --rc genhtml_legend=1 00:10:21.489 --rc geninfo_all_blocks=1 00:10:21.489 --rc geninfo_unexecuted_blocks=1 00:10:21.489 00:10:21.489 ' 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:21.489 20:28:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61134 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:21.489 20:28:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61134 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61134 ']' 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.489 20:28:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.746 [2024-11-25 20:28:29.685149] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:10:21.746 [2024-11-25 20:28:29.685512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61134 ] 00:10:21.746 [2024-11-25 20:28:29.869740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.003 [2024-11-25 20:28:29.993879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.937 20:28:30 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.937 20:28:30 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:22.937 20:28:30 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:22.937 20:28:30 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:10:22.937 20:28:30 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:22.937 20:28:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:22.937 20:28:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:22.937 20:28:30 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:22.937 20:28:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.937 20:28:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.196 20:28:31 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.196 20:28:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:10:23.196 20:28:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.196 20:28:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.196 20:28:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 20:28:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 20:28:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:23.454 20:28:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 20:28:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 20:28:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 20:28:31 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:23.454 20:28:31 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:23.454 20:28:31 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:23.454 20:28:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.454 20:28:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 20:28:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.454 20:28:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:23.454 20:28:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:23.455 20:28:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "10cc7ec9-7e18-44a5-bb24-61150167c61b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "10cc7ec9-7e18-44a5-bb24-61150167c61b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "52428e05-5009-4440-83f0-dff1109e8a60"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "52428e05-5009-4440-83f0-dff1109e8a60",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8c5f87a5-7afe-4ecf-9c6a-2795823bf46e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c5f87a5-7afe-4ecf-9c6a-2795823bf46e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "68cb666e-d6ca-468f-b380-0e81dff63055"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "68cb666e-d6ca-468f-b380-0e81dff63055",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "772a558b-6825-4f83-a738-71d3fa9b9bcf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "772a558b-6825-4f83-a738-71d3fa9b9bcf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "703a4815-3415-4238-a2ba-882b873ab811"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "703a4815-3415-4238-a2ba-882b873ab811",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:23.455 20:28:31 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:23.455 20:28:31 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:23.455 20:28:31 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:23.455 20:28:31 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61134 00:10:23.455 20:28:31 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61134 ']' 00:10:23.455 20:28:31 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61134 00:10:23.455 20:28:31 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:23.455 20:28:31 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.455 20:28:31 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61134 00:10:23.712 killing process with pid 61134 00:10:23.712 20:28:31 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.712 20:28:31 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.712 20:28:31 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61134' 00:10:23.712 20:28:31 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61134 00:10:23.712 20:28:31 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61134 00:10:26.275 20:28:34 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:26.275 20:28:34 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:26.275 20:28:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:26.275 20:28:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.275 20:28:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 ************************************ 00:10:26.275 START TEST bdev_hello_world 00:10:26.275 ************************************ 00:10:26.275 20:28:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:26.275 [2024-11-25 20:28:34.193981] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:10:26.275 [2024-11-25 20:28:34.194311] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61235 ] 00:10:26.275 [2024-11-25 20:28:34.378218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.534 [2024-11-25 20:28:34.505364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.100 [2024-11-25 20:28:35.179625] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:27.100 [2024-11-25 20:28:35.179680] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:27.100 [2024-11-25 20:28:35.179704] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:27.100 [2024-11-25 20:28:35.182771] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:27.100 [2024-11-25 20:28:35.183273] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:27.100 [2024-11-25 20:28:35.183300] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:27.100 [2024-11-25 20:28:35.183856] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
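For context, the bdev_hello_world pass above can be reproduced by hand outside the autorun wrapper. A minimal sketch, assuming the repo layout from this run and that the QEMU NVMe devices are already bound to a userspace driver:

    cd /home/vagrant/spdk_repo/spdk
    # Same invocation the trace shows: open Nvme0n1, write "Hello World!",
    # then read it back and print the string.
    sudo build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1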
00:10:27.100 00:10:27.100 [2024-11-25 20:28:35.183970] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:28.473 00:10:28.473 real 0m2.253s 00:10:28.473 user 0m1.874s 00:10:28.473 sys 0m0.270s 00:10:28.473 ************************************ 00:10:28.473 END TEST bdev_hello_world 00:10:28.473 ************************************ 00:10:28.473 20:28:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.473 20:28:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 20:28:36 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:28.473 20:28:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.473 20:28:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.473 20:28:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 ************************************ 00:10:28.473 START TEST bdev_bounds 00:10:28.473 ************************************ 00:10:28.473 Process bdevio pid: 61282 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61282 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61282' 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61282 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61282 ']' 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.473 20:28:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:28.473 [2024-11-25 20:28:36.525005] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
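The bdev_bounds stage starting here does not run bdevio standalone; it starts the CUnit binary in wait mode and then drives it over JSON-RPC. A condensed sketch of the same flow, assuming the paths and flags taken from this trace:

    # Start bdevio waiting for the start RPC (-w; -s 0 as in the trace above):
    sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # Once /var/tmp/spdk.sock is up, kick off every registered suite:
    test/bdev/bdevio/tests.py perform_tests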
00:10:28.473 [2024-11-25 20:28:36.525384] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61282 ] 00:10:28.731 [2024-11-25 20:28:36.709434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:28.731 [2024-11-25 20:28:36.836637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.731 [2024-11-25 20:28:36.836856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.731 [2024-11-25 20:28:36.836882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.665 20:28:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.665 20:28:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:29.665 20:28:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:29.665 I/O targets: 00:10:29.665 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:29.665 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:29.665 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:29.665 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:29.665 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:29.665 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:29.665 00:10:29.665 00:10:29.665 CUnit - A unit testing framework for C - Version 2.1-3 00:10:29.665 http://cunit.sourceforge.net/ 00:10:29.665 00:10:29.665 00:10:29.665 Suite: bdevio tests on: Nvme3n1 00:10:29.665 Test: blockdev write read block ...passed 00:10:29.665 Test: blockdev write zeroes read block ...passed 00:10:29.665 Test: blockdev write zeroes read no split ...passed 00:10:29.665 Test: blockdev write zeroes read split ...passed 00:10:29.665 Test: blockdev write zeroes read split partial ...passed 00:10:29.665 Test: blockdev reset ...[2024-11-25 20:28:37.713346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:29.665 [2024-11-25 20:28:37.718346] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:10:29.665 passed 00:10:29.665 Test: blockdev write read 8 blocks ...passed
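Each suite's "blockdev reset" test disconnects and reconnects the controller underneath the bdev, which is what the nvme_ctrlr_disconnect / bdev_nvme_reset_ctrlr_complete notice pair above records. To the best of my knowledge the same reset can also be triggered administratively; a hypothetical one-liner, assuming the app is still listening on the default RPC socket and using the controller name from the attach config earlier:

    scripts/rpc.py bdev_nvme_reset_controller Nvme3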
00:10:29.665 Test: blockdev write read size > 128k ...passed 00:10:29.665 Test: blockdev write read invalid size ...passed 00:10:29.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:29.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:29.665 Test: blockdev write read max offset ...passed 00:10:29.665 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:29.665 Test: blockdev writev readv 8 blocks ...passed 00:10:29.665 Test: blockdev writev readv 30 x 1block ...passed 00:10:29.665 Test: blockdev writev readv block ...passed 00:10:29.665 Test: blockdev writev readv size > 128k ...passed 00:10:29.665 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:29.665 Test: blockdev comparev and writev ...[2024-11-25 20:28:37.728996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b100a000 len:0x1000 00:10:29.665 [2024-11-25 20:28:37.729184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:29.665 passed 00:10:29.665 Test: blockdev nvme passthru rw ...passed 00:10:29.665 Test: blockdev nvme passthru vendor specific ...passed 00:10:29.665 Test: blockdev nvme admin passthru ...[2024-11-25 20:28:37.730299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:29.665 [2024-11-25 20:28:37.730364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:29.665 passed 00:10:29.665 Test: blockdev copy ...passed 00:10:29.665 Suite: bdevio tests on: Nvme2n3 00:10:29.665 Test: blockdev write read block ...passed 00:10:29.665 Test: blockdev write zeroes read block ...passed 00:10:29.665 Test: blockdev write zeroes read no split ...passed 00:10:29.665 Test: blockdev write zeroes read split ...passed 00:10:29.924 Test: blockdev write zeroes read split partial ...passed 00:10:29.924 Test: blockdev reset ...[2024-11-25 20:28:37.811866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:29.924 [2024-11-25 20:28:37.817258] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:29.924 passed 00:10:29.924 Test: blockdev write read 8 blocks ...passed
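Worth noting while the Nvme2 suites run: Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of the single controller at 0000:00:12.0 (serial 12342 in the bdev dump earlier), so all three reset tests bounce the same device. A quick way to confirm the mapping, assuming the app is still up on the default RPC socket; jq is already used elsewhere in this run:

    scripts/rpc.py bdev_nvme_get_controllers
    scripts/rpc.py bdev_get_bdevs -b Nvme2n2 | jq '.[0].driver_specific.nvme[0].ns_data'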
00:10:29.924 Test: blockdev write read size > 128k ...passed 00:10:29.924 Test: blockdev write read invalid size ...passed 00:10:29.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:29.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:29.924 Test: blockdev write read max offset ...passed 00:10:29.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:29.924 Test: blockdev writev readv 8 blocks ...passed 00:10:29.924 Test: blockdev writev readv 30 x 1block ...passed 00:10:29.924 Test: blockdev writev readv block ...passed 00:10:29.924 Test: blockdev writev readv size > 128k ...passed 00:10:29.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:29.924 Test: blockdev comparev and writev ...[2024-11-25 20:28:37.826867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x294a06000 len:0x1000 00:10:29.924 [2024-11-25 20:28:37.826933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:29.924 passed 00:10:29.924 Test: blockdev nvme passthru rw ...passed 00:10:29.924 Test: blockdev nvme passthru vendor specific ...[2024-11-25 20:28:37.827881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:29.924 [2024-11-25 20:28:37.828030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:29.924 passed 00:10:29.924 Test: blockdev nvme admin passthru ...passed 00:10:29.924 Test: blockdev copy ...passed 00:10:29.924 Suite: bdevio tests on: Nvme2n2 00:10:29.925 Test: blockdev write read block ...passed 00:10:29.925 Test: blockdev write zeroes read block ...passed 00:10:29.925 Test: blockdev write zeroes read no split ...passed 00:10:29.925 Test: blockdev write zeroes read split ...passed 00:10:29.925 Test: blockdev write zeroes read split partial ...passed 00:10:29.925 Test: blockdev reset ...[2024-11-25 20:28:37.908481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:29.925 [2024-11-25 20:28:37.913627] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:29.925 passed 00:10:29.925 Test: blockdev write read 8 blocks ...passed
00:10:29.925 Test: blockdev write read size > 128k ...passed 00:10:29.925 Test: blockdev write read invalid size ...passed 00:10:29.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:29.925 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:29.925 Test: blockdev write read max offset ...passed 00:10:29.925 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:29.925 Test: blockdev writev readv 8 blocks ...passed 00:10:29.925 Test: blockdev writev readv 30 x 1block ...passed 00:10:29.925 Test: blockdev writev readv block ...passed 00:10:29.925 Test: blockdev writev readv size > 128k ...passed 00:10:29.925 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:29.925 Test: blockdev comparev and writev ...[2024-11-25 20:28:37.924589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc83c000 len:0x1000 00:10:29.925 [2024-11-25 20:28:37.924808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:29.925 passed 00:10:29.925 Test: blockdev nvme passthru rw ...passed 00:10:29.925 Test: blockdev nvme passthru vendor specific ...[2024-11-25 20:28:37.926178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:29.925 [2024-11-25 20:28:37.926291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:29.925 passed 00:10:29.925 Test: blockdev nvme admin passthru ...passed 00:10:29.925 Test: blockdev copy ...passed 00:10:29.925 Suite: bdevio tests on: Nvme2n1 00:10:29.925 Test: blockdev write read block ...passed 00:10:29.925 Test: blockdev write zeroes read block ...passed 00:10:29.925 Test: blockdev write zeroes read no split ...passed 00:10:29.925 Test: blockdev write zeroes read split ...passed 00:10:29.925 Test: blockdev write zeroes read split partial ...passed 00:10:29.925 Test: blockdev reset ...[2024-11-25 20:28:38.010333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:29.925 [2024-11-25 20:28:38.015477] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:29.925 passed 00:10:29.925 Test: blockdev write read 8 blocks ...passed
00:10:29.925 Test: blockdev write read size > 128k ...passed 00:10:29.925 Test: blockdev write read invalid size ...passed 00:10:29.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:29.925 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:29.925 Test: blockdev write read max offset ...passed 00:10:29.925 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:29.925 Test: blockdev writev readv 8 blocks ...passed 00:10:29.925 Test: blockdev writev readv 30 x 1block ...passed 00:10:29.925 Test: blockdev writev readv block ...passed 00:10:29.925 Test: blockdev writev readv size > 128k ...passed 00:10:29.925 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:29.925 Test: blockdev comparev and writev ...[2024-11-25 20:28:38.025138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc838000 len:0x1000 00:10:29.925 [2024-11-25 20:28:38.025216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:29.925 passed 00:10:29.925 Test: blockdev nvme passthru rw ...passed 00:10:29.925 Test: blockdev nvme passthru vendor specific ...passed 00:10:29.925 Test: blockdev nvme admin passthru ...[2024-11-25 20:28:38.026274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:29.925 [2024-11-25 20:28:38.026315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:29.925 passed 00:10:29.925 Test: blockdev copy ...passed 00:10:29.925 Suite: bdevio tests on: Nvme1n1 00:10:29.925 Test: blockdev write read block ...passed 00:10:29.925 Test: blockdev write zeroes read block ...passed 00:10:29.925 Test: blockdev write zeroes read no split ...passed 00:10:30.183 Test: blockdev write zeroes read split ...passed 00:10:30.183 Test: blockdev write zeroes read split partial ...passed 00:10:30.183 Test: blockdev reset ...[2024-11-25 20:28:38.109950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:30.183 [2024-11-25 20:28:38.114955] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:10:30.183 passed 00:10:30.183 Test: blockdev write read 8 blocks ...passed
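One wrinkle to watch for in the Nvme0n1 suite below: that bdev was created with separate (non-interleaved) metadata, so bdevio skips comparev_and_writev on it and logs an ERROR notice that is expected rather than a failure. A hedged spot-check of the relevant fields, matching the bdev dump earlier in this log:

    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave, dif_type}'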
00:10:30.183 Test: blockdev write read size > 128k ...passed 00:10:30.183 Test: blockdev write read invalid size ...passed 00:10:30.183 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:30.183 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:30.183 Test: blockdev write read max offset ...passed 00:10:30.183 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:30.183 Test: blockdev writev readv 8 blocks ...passed 00:10:30.183 Test: blockdev writev readv 30 x 1block ...passed 00:10:30.183 Test: blockdev writev readv block ...passed 00:10:30.183 Test: blockdev writev readv size > 128k ...passed 00:10:30.183 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:30.183 Test: blockdev comparev and writev ...[2024-11-25 20:28:38.125648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc834000 len:0x1000 00:10:30.183 [2024-11-25 20:28:38.125870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:30.183 passed 00:10:30.183 Test: blockdev nvme passthru rw ...passed 00:10:30.183 Test: blockdev nvme passthru vendor specific ...[2024-11-25 20:28:38.127198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:30.183 [2024-11-25 20:28:38.127302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:30.183 passed 00:10:30.183 Test: blockdev nvme admin passthru ...passed 00:10:30.183 Test: blockdev copy ...passed 00:10:30.183 Suite: bdevio tests on: Nvme0n1 00:10:30.183 Test: blockdev write read block ...passed 00:10:30.183 Test: blockdev write zeroes read block ...passed 00:10:30.183 Test: blockdev write zeroes read no split ...passed 00:10:30.183 Test: blockdev write zeroes read split ...passed 00:10:30.183 Test: blockdev write zeroes read split partial ...passed 00:10:30.183 Test: blockdev reset ...[2024-11-25 20:28:38.211303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:30.183 [2024-11-25 20:28:38.216144] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:30.183 passed 00:10:30.183 Test: blockdev write read 8 blocks ...passed 00:10:30.183 Test: blockdev write read size > 128k ...passed 00:10:30.183 Test: blockdev write read invalid size ...passed 00:10:30.183 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:30.183 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:30.183 Test: blockdev write read max offset ...passed 00:10:30.183 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:30.183 Test: blockdev writev readv 8 blocks ...passed 00:10:30.183 Test: blockdev writev readv 30 x 1block ...passed 00:10:30.183 Test: blockdev writev readv block ...passed 00:10:30.183 Test: blockdev writev readv size > 128k ...passed 00:10:30.183 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:30.183 Test: blockdev comparev and writev ...[2024-11-25 20:28:38.224839] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:30.183 separate metadata which is not supported yet. 00:10:30.183 passed 00:10:30.183 Test: blockdev nvme passthru rw ...
00:10:30.183 passed 00:10:30.183 Test: blockdev nvme passthru vendor specific ...passed 00:10:30.183 Test: blockdev nvme admin passthru ...[2024-11-25 20:28:38.225345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:30.183 [2024-11-25 20:28:38.225407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:30.183 passed 00:10:30.183 Test: blockdev copy ...passed 00:10:30.183 00:10:30.183 Run Summary: Type Total Ran Passed Failed Inactive 00:10:30.183 suites 6 6 n/a 0 0 00:10:30.183 tests 138 138 138 0 0 00:10:30.183 asserts 893 893 893 0 n/a 00:10:30.183 00:10:30.183 Elapsed time = 1.597 seconds 00:10:30.183 0 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61282 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61282 ']' 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61282 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61282 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61282' 00:10:30.183 killing process with pid 61282 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61282 00:10:30.183 20:28:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61282 00:10:31.556 20:28:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:31.556 ************************************ 00:10:31.556 END TEST bdev_bounds 00:10:31.556 ************************************ 00:10:31.556 00:10:31.556 real 0m2.964s 00:10:31.556 user 0m7.532s 00:10:31.556 sys 0m0.452s 00:10:31.556 20:28:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.556 20:28:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:31.556 20:28:39 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:31.556 20:28:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:31.556 20:28:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.556 20:28:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.556 ************************************ 00:10:31.556 START TEST bdev_nbd 00:10:31.556 ************************************ 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61342 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61342 /var/tmp/spdk-nbd.sock 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61342 ']' 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:31.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.556 20:28:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:31.556 [2024-11-25 20:28:39.586736] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
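The bdev_nbd stage starting here exports each bdev as a kernel block device and sanity-checks it with a 4 KiB direct-I/O dd, as the traces below show. A minimal by-hand sketch of the same flow, assuming the bdev_svc app from this trace is listening on /var/tmp/spdk-nbd.sock:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    sudo dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0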
00:10:31.556 [2024-11-25 20:28:39.586860] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.814 [2024-11-25 20:28:39.771403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.814 [2024-11-25 20:28:39.895615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.749 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.749 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:32.749 20:28:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:32.750 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.008 1+0 records in 
00:10:33.008 1+0 records out 00:10:33.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561922 s, 7.3 MB/s 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:33.008 20:28:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.267 1+0 records in 00:10:33.267 1+0 records out 00:10:33.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790006 s, 5.2 MB/s 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:33.267 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.526 1+0 records in 00:10:33.526 1+0 records out 00:10:33.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654239 s, 6.3 MB/s 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:33.526 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.786 1+0 records in 00:10:33.786 1+0 records out 00:10:33.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107611 s, 3.8 MB/s 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.786 20:28:41 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:33.786 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.046 20:28:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.046 1+0 records in 00:10:34.046 1+0 records out 00:10:34.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123746 s, 3.3 MB/s 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:34.046 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.304 1+0 records in 00:10:34.304 1+0 records out 00:10:34.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815871 s, 5.0 MB/s 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:34.304 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd0", 00:10:34.564 "bdev_name": "Nvme0n1" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd1", 00:10:34.564 "bdev_name": "Nvme1n1" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd2", 00:10:34.564 "bdev_name": "Nvme2n1" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd3", 00:10:34.564 "bdev_name": "Nvme2n2" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd4", 00:10:34.564 "bdev_name": "Nvme2n3" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd5", 00:10:34.564 "bdev_name": "Nvme3n1" 00:10:34.564 } 00:10:34.564 ]' 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd0", 00:10:34.564 "bdev_name": "Nvme0n1" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd1", 00:10:34.564 "bdev_name": "Nvme1n1" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd2", 00:10:34.564 "bdev_name": "Nvme2n1" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd3", 00:10:34.564 "bdev_name": "Nvme2n2" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd4", 00:10:34.564 "bdev_name": "Nvme2n3" 00:10:34.564 }, 00:10:34.564 { 00:10:34.564 "nbd_device": "/dev/nbd5", 00:10:34.564 "bdev_name": "Nvme3n1" 00:10:34.564 } 00:10:34.564 ]' 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.564 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.823 20:28:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.082 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.341 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:35.600 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:35.600 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.601 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.861 20:28:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:36.120 20:28:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:36.120 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:36.121 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:36.380 /dev/nbd0 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:36.380 
20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.380 1+0 records in 00:10:36.380 1+0 records out 00:10:36.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732852 s, 5.6 MB/s 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:36.380 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:36.640 /dev/nbd1 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.640 1+0 records in 00:10:36.640 1+0 records out 00:10:36.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637095 s, 6.4 MB/s 00:10:36.640 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.898 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:36.898 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.898 20:28:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:36.898 20:28:44 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:10:36.898 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.898 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:36.898 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:36.898 /dev/nbd10 00:10:36.898 20:28:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.898 1+0 records in 00:10:36.898 1+0 records out 00:10:36.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702405 s, 5.8 MB/s 00:10:36.898 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:37.158 /dev/nbd11 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.158 20:28:45 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.158 1+0 records in 00:10:37.158 1+0 records out 00:10:37.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102702 s, 4.0 MB/s 00:10:37.158 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:37.418 /dev/nbd12 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.418 1+0 records in 00:10:37.418 1+0 records out 00:10:37.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553251 s, 7.4 MB/s 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:37.418 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:37.678 /dev/nbd13 
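[Editor's note] These nbd_start_disk traces all follow the same readiness pattern: start the export over the RPC socket, poll /proc/partitions until the kernel registers the device, then prove it is readable with one direct 4 KiB read. A minimal standalone sketch of that pattern, assuming a running spdk-nbd target on /var/tmp/spdk-nbd.sock and a bdev named Nvme0n1 (both taken from the log); the 0.1 s retry delay is an assumption, since the traced helper finds the device on the first try here:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Export the bdev as /dev/nbd0 over the NBD RPC socket.
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0

    # Poll /proc/partitions until the kernel registers nbd0 (up to 20 tries,
    # the same bound as the traced waitfornbd loop).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1   # assumed retry delay; not visible in this log
    done

    # One 4 KiB direct read confirms the device actually serves I/O.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct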
00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.678 1+0 records in 00:10:37.678 1+0 records out 00:10:37.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903547 s, 4.5 MB/s 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.678 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:37.938 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:37.938 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.938 20:28:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:37.938 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd0", 00:10:37.938 "bdev_name": "Nvme0n1" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd1", 00:10:37.938 "bdev_name": "Nvme1n1" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd10", 00:10:37.938 "bdev_name": "Nvme2n1" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd11", 00:10:37.938 "bdev_name": "Nvme2n2" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd12", 00:10:37.938 "bdev_name": "Nvme2n3" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd13", 00:10:37.938 "bdev_name": "Nvme3n1" 00:10:37.938 } 00:10:37.938 ]' 00:10:37.938 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd0", 00:10:37.938 "bdev_name": "Nvme0n1" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd1", 00:10:37.938 "bdev_name": "Nvme1n1" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd10", 00:10:37.938 "bdev_name": "Nvme2n1" 
00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd11", 00:10:37.938 "bdev_name": "Nvme2n2" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd12", 00:10:37.938 "bdev_name": "Nvme2n3" 00:10:37.938 }, 00:10:37.938 { 00:10:37.938 "nbd_device": "/dev/nbd13", 00:10:37.938 "bdev_name": "Nvme3n1" 00:10:37.938 } 00:10:37.938 ]' 00:10:37.938 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:38.198 /dev/nbd1 00:10:38.198 /dev/nbd10 00:10:38.198 /dev/nbd11 00:10:38.198 /dev/nbd12 00:10:38.198 /dev/nbd13' 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:38.198 /dev/nbd1 00:10:38.198 /dev/nbd10 00:10:38.198 /dev/nbd11 00:10:38.198 /dev/nbd12 00:10:38.198 /dev/nbd13' 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:38.198 256+0 records in 00:10:38.198 256+0 records out 00:10:38.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117138 s, 89.5 MB/s 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:38.198 256+0 records in 00:10:38.198 256+0 records out 00:10:38.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126711 s, 8.3 MB/s 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.198 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:38.458 256+0 records in 00:10:38.458 256+0 records out 00:10:38.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134119 s, 7.8 MB/s 00:10:38.458 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.458 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:38.458 256+0 records in 00:10:38.458 256+0 records out 00:10:38.458 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129924 s, 8.1 MB/s 00:10:38.458 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.458 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:38.717 256+0 records in 00:10:38.717 256+0 records out 00:10:38.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13059 s, 8.0 MB/s 00:10:38.717 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.717 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:38.717 256+0 records in 00:10:38.717 256+0 records out 00:10:38.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131427 s, 8.0 MB/s 00:10:38.717 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.717 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:38.977 256+0 records in 00:10:38.977 256+0 records out 00:10:38.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143666 s, 7.3 MB/s 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.977 20:28:46 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:38.977 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:38.977 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:38.977 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.977 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:38.977 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:38.977 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:38.977 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.977 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.236 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:39.495 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:39.495 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:39.495 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:39.496 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.496 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.496 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:39.496 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.496 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.496 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.496 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.755 20:28:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.755 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.013 20:28:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:40.013 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:40.271 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:40.271 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:40.271 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.271 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.271 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.272 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:40.530 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:40.530 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:40.530 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:40.530 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:40.530 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:40.530 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:40.788 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:40.788 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:40.788 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:40.788 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:40.788 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:40.788 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:40.788 20:28:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:40.789 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.789 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:40.789 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:40.789 malloc_lvol_verify 00:10:40.789 20:28:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:41.047 deef7faa-60f6-427a-b4ba-013fc94ffc0a 00:10:41.047 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:41.305 95af4a72-ea33-461b-a156-4d2e1946bd4b 00:10:41.305 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:41.565 /dev/nbd0 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:41.565 mke2fs 1.47.0 (5-Feb-2023) 00:10:41.565 Discarding device blocks: 0/4096 done 00:10:41.565 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:41.565 00:10:41.565 Allocating group tables: 0/1 done 00:10:41.565 Writing inode tables: 0/1 done 00:10:41.565 Creating journal (1024 blocks): done 00:10:41.565 Writing superblocks and filesystem accounting information: 0/1 done 00:10:41.565 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:41.565 20:28:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.565 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61342 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61342 ']' 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61342 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61342 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.825 killing process with pid 61342 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61342' 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61342 00:10:41.825 20:28:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61342 00:10:43.203 20:28:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:43.203 00:10:43.203 real 0m11.601s 00:10:43.203 user 0m14.888s 00:10:43.203 sys 0m4.895s 00:10:43.203 20:28:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.203 ************************************ 00:10:43.203 END TEST bdev_nbd 00:10:43.203 ************************************ 00:10:43.204 20:28:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:43.204 20:28:51 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:43.204 20:28:51 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:10:43.204 skipping fio tests on NVMe due to multi-ns failures. 00:10:43.204 20:28:51 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
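[Editor's note] Before killing the per-test app (pid 61342 above), the NBD suite makes two end-to-end checks: a random-pattern pass that pushes 1 MiB through each exported device with dd and byte-compares it against the source file, and a logical-volume round trip that puts ext4 on an lvol served over NBD. Condensed from the RPCs traced above, with the long repo paths shortened to /tmp for the sketch (bdev names, sizes and the mkfs step are as traced):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Random-pattern integrity check against one exported device.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

    # Logical-volume round trip: 16 MiB malloc bdev -> lvstore -> 4 MiB lvol.
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0        # succeeds only if the lvol serves real I/O
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_get_disks   # returns [] once every device is stopped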
00:10:43.204 20:28:51 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:43.204 20:28:51 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:43.204 20:28:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:43.204 20:28:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.204 20:28:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.204 ************************************ 00:10:43.204 START TEST bdev_verify 00:10:43.204 ************************************ 00:10:43.204 20:28:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:43.204 [2024-11-25 20:28:51.248911] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:10:43.204 [2024-11-25 20:28:51.249036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61731 ] 00:10:43.463 [2024-11-25 20:28:51.432815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.463 [2024-11-25 20:28:51.554339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.463 [2024-11-25 20:28:51.554393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.401 Running I/O for 5 seconds... 00:10:46.309 17600.00 IOPS, 68.75 MiB/s [2024-11-25T20:28:55.382Z] 17408.00 IOPS, 68.00 MiB/s [2024-11-25T20:28:56.779Z] 17216.00 IOPS, 67.25 MiB/s [2024-11-25T20:28:57.370Z] 17536.00 IOPS, 68.50 MiB/s [2024-11-25T20:28:57.370Z] 17561.60 IOPS, 68.60 MiB/s 00:10:49.234 Latency(us) 00:10:49.234 [2024-11-25T20:28:57.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.234 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x0 length 0xbd0bd 00:10:49.234 Nvme0n1 : 5.05 1317.22 5.15 0.00 0.00 96801.19 22845.48 85907.43 00:10:49.234 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:49.234 Nvme0n1 : 5.05 1571.57 6.14 0.00 0.00 81272.53 16844.59 77485.13 00:10:49.234 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x0 length 0xa0000 00:10:49.234 Nvme1n1 : 5.06 1316.25 5.14 0.00 0.00 96666.74 27161.91 79590.71 00:10:49.234 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0xa0000 length 0xa0000 00:10:49.234 Nvme1n1 : 5.05 1571.14 6.14 0.00 0.00 81167.47 16528.76 73273.99 00:10:49.234 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x0 length 0x80000 00:10:49.234 Nvme2n1 : 5.08 1323.34 5.17 0.00 0.00 95933.81 8369.66 76642.90 00:10:49.234 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x80000 length 0x80000 00:10:49.234 Nvme2n1 : 5.05 1570.76 6.14 0.00 0.00 80993.92 16212.92 71168.41 00:10:49.234 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x0 length 0x80000 00:10:49.234 Nvme2n2 : 5.08 1323.02 5.17 0.00 0.00 95722.25 8632.85 74958.44 00:10:49.234 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x80000 length 0x80000 00:10:49.234 Nvme2n2 : 5.05 1570.39 6.13 0.00 0.00 80893.57 16002.36 72852.87 00:10:49.234 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x0 length 0x80000 00:10:49.234 Nvme2n3 : 5.09 1332.19 5.20 0.00 0.00 95027.73 11001.63 80011.82 00:10:49.234 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x80000 length 0x80000 00:10:49.234 Nvme2n3 : 5.06 1569.41 6.13 0.00 0.00 80794.67 15054.86 73695.10 00:10:49.234 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x0 length 0x20000 00:10:49.234 Nvme3n1 : 5.09 1331.87 5.20 0.00 0.00 94874.37 11159.54 82117.40 00:10:49.234 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.234 Verification LBA range: start 0x20000 length 0x20000 00:10:49.234 Nvme3n1 : 5.07 1579.12 6.17 0.00 0.00 80188.91 3171.52 73273.99 00:10:49.234 [2024-11-25T20:28:57.370Z] =================================================================================================================== 00:10:49.234 [2024-11-25T20:28:57.370Z] Total : 17376.26 67.88 0.00 0.00 87733.04 3171.52 85907.43 00:10:51.136 00:10:51.136 real 0m7.596s 00:10:51.136 user 0m14.029s 00:10:51.136 sys 0m0.296s 00:10:51.136 20:28:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.136 20:28:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:51.136 ************************************ 00:10:51.136 END TEST bdev_verify 00:10:51.136 ************************************ 00:10:51.136 20:28:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:51.136 20:28:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:51.136 20:28:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.136 20:28:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:51.136 ************************************ 00:10:51.136 START TEST bdev_verify_big_io 00:10:51.136 ************************************ 00:10:51.136 20:28:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:51.136 [2024-11-25 20:28:58.927000] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:10:51.136 [2024-11-25 20:28:58.927140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61836 ] 00:10:51.136 [2024-11-25 20:28:59.115040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:51.136 [2024-11-25 20:28:59.228244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.136 [2024-11-25 20:28:59.228273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.073 Running I/O for 5 seconds... 00:10:57.141 1710.00 IOPS, 106.88 MiB/s [2024-11-25T20:29:05.856Z] 3178.50 IOPS, 198.66 MiB/s [2024-11-25T20:29:05.856Z] 3896.00 IOPS, 243.50 MiB/s 00:10:57.720 Latency(us) 00:10:57.720 [2024-11-25T20:29:05.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.720 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.720 Verification LBA range: start 0x0 length 0xbd0b 00:10:57.720 Nvme0n1 : 5.60 137.21 8.58 0.00 0.00 909396.37 25056.33 956772.96 00:10:57.720 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.720 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:57.720 Nvme0n1 : 5.47 188.63 11.79 0.00 0.00 655345.36 16107.64 822016.21 00:10:57.720 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.720 Verification LBA range: start 0x0 length 0xa000 00:10:57.720 Nvme1n1 : 5.69 137.94 8.62 0.00 0.00 875480.63 71589.53 950035.12 00:10:57.720 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.720 Verification LBA range: start 0xa000 length 0xa000 00:10:57.720 Nvme1n1 : 5.47 192.01 12.00 0.00 0.00 630750.32 78327.36 680521.61 00:10:57.720 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.720 Verification LBA range: start 0x0 length 0x8000 00:10:57.720 Nvme2n1 : 5.69 138.42 8.65 0.00 0.00 852496.26 90960.81 950035.12 00:10:57.720 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.720 Verification LBA range: start 0x8000 length 0x8000 00:10:57.720 Nvme2n1 : 5.63 191.24 11.95 0.00 0.00 617748.13 51376.01 1280189.17 00:10:57.720 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.720 Verification LBA range: start 0x0 length 0x8000 00:10:57.720 Nvme2n2 : 5.72 142.43 8.90 0.00 0.00 819589.82 20424.07 1003937.82 00:10:57.720 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.720 Verification LBA range: start 0x8000 length 0x8000 00:10:57.720 Nvme2n2 : 5.63 193.84 12.11 0.00 0.00 595607.87 53481.59 1300402.69 00:10:57.721 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.721 Verification LBA range: start 0x0 length 0x8000 00:10:57.721 Nvme2n3 : 5.73 152.28 9.52 0.00 0.00 749398.73 9264.53 805171.61 00:10:57.721 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.721 Verification LBA range: start 0x8000 length 0x8000 00:10:57.721 Nvme2n3 : 5.70 206.41 12.90 0.00 0.00 547087.06 22634.92 1313878.36 00:10:57.721 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.721 Verification LBA range: start 0x0 length 0x2000 00:10:57.721 Nvme3n1 : 5.74 156.70 9.79 0.00 0.00 709629.34 7001.03 805171.61 00:10:57.721 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:10:57.721 Verification LBA range: start 0x2000 length 0x2000 00:10:57.721 Nvme3n1 : 5.74 231.79 14.49 0.00 0.00 477020.00 2342.45 1340829.71 00:10:57.721 [2024-11-25T20:29:05.857Z] =================================================================================================================== 00:10:57.721 [2024-11-25T20:29:05.857Z] Total : 2068.90 129.31 0.00 0.00 680292.78 2342.45 1340829.71 00:10:59.660 00:10:59.660 real 0m8.890s 00:10:59.661 user 0m16.527s 00:10:59.661 sys 0m0.372s 00:10:59.661 20:29:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.661 20:29:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 ************************************ 00:10:59.661 END TEST bdev_verify_big_io 00:10:59.661 ************************************ 00:10:59.927 20:29:07 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.927 20:29:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:59.927 20:29:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.927 20:29:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:59.927 ************************************ 00:10:59.927 START TEST bdev_write_zeroes 00:10:59.927 ************************************ 00:10:59.927 20:29:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.927 [2024-11-25 20:29:07.898451] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:10:59.927 [2024-11-25 20:29:07.898601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61949 ] 00:11:00.186 [2024-11-25 20:29:08.078362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.186 [2024-11-25 20:29:08.224078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.121 Running I/O for 1 seconds... 
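[Editor's note] Every stage in this block, bdev_verify, bdev_verify_big_io and the bdev_write_zeroes run that has just started, is the same bdevperf binary driven with different flags; only the I/O size, workload and runtime change between them. The invocations as traced (flags verbatim from the command lines above; -C is kept as traced without interpretation):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -q 128: queue depth, -o: I/O size in bytes, -w: workload,
    # -t: runtime in seconds, -m 0x3: run reactors on cores 0 and 1.
    "$bdevperf" --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
    "$bdevperf" --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1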
00:11:02.056 79806.00 IOPS, 311.74 MiB/s 00:11:02.056 Latency(us) 00:11:02.056 [2024-11-25T20:29:10.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.056 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.056 Nvme0n1 : 1.02 13251.55 51.76 0.00 0.00 9640.41 8369.66 29899.16 00:11:02.056 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.056 Nvme1n1 : 1.02 13240.89 51.72 0.00 0.00 9636.86 8632.85 23792.99 00:11:02.056 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.056 Nvme2n1 : 1.02 13228.67 51.67 0.00 0.00 9610.59 8317.02 22108.53 00:11:02.056 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.056 Nvme2n2 : 1.02 13215.69 51.62 0.00 0.00 9602.68 8264.38 21476.86 00:11:02.056 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.056 Nvme2n3 : 1.02 13202.92 51.57 0.00 0.00 9580.81 8317.02 20108.23 00:11:02.056 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.056 Nvme3n1 : 1.02 13128.19 51.28 0.00 0.00 9588.10 7474.79 21792.69 00:11:02.056 [2024-11-25T20:29:10.192Z] =================================================================================================================== 00:11:02.056 [2024-11-25T20:29:10.192Z] Total : 79267.91 309.64 0.00 0.00 9609.93 7474.79 29899.16 00:11:03.433 00:11:03.433 real 0m3.532s 00:11:03.433 user 0m3.062s 00:11:03.433 sys 0m0.353s 00:11:03.433 20:29:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.433 20:29:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:03.433 ************************************ 00:11:03.433 END TEST bdev_write_zeroes 00:11:03.433 ************************************ 00:11:03.433 20:29:11 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.433 20:29:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:03.433 20:29:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.433 20:29:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:03.433 ************************************ 00:11:03.433 START TEST bdev_json_nonenclosed 00:11:03.433 ************************************ 00:11:03.434 20:29:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.434 [2024-11-25 20:29:11.502411] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:11:03.434 [2024-11-25 20:29:11.502547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62008 ] 00:11:03.692 [2024-11-25 20:29:11.686917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.951 [2024-11-25 20:29:11.838905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.951 [2024-11-25 20:29:11.839036] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:03.951 [2024-11-25 20:29:11.839062] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:03.951 [2024-11-25 20:29:11.839075] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.210 00:11:04.210 real 0m0.722s 00:11:04.210 user 0m0.449s 00:11:04.210 sys 0m0.168s 00:11:04.210 20:29:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.210 20:29:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:04.210 ************************************ 00:11:04.210 END TEST bdev_json_nonenclosed 00:11:04.210 ************************************ 00:11:04.210 20:29:12 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.210 20:29:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:04.210 20:29:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.210 20:29:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.210 ************************************ 00:11:04.210 START TEST bdev_json_nonarray 00:11:04.210 ************************************ 00:11:04.210 20:29:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.210 [2024-11-25 20:29:12.305556] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:11:04.210 [2024-11-25 20:29:12.305690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:11:04.468 [2024-11-25 20:29:12.488822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.728 [2024-11-25 20:29:12.644091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.728 [2024-11-25 20:29:12.644222] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
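[Editor's note] Both JSON negative tests hand bdevperf a deliberately malformed --json file and expect json_config_prepare_ctx to fail exactly as the *ERROR* lines show: the top-level value must be an object, and its 'subsystems' key must be an array. For contrast, a minimal well-formed skeleton written as a heredoc; the malloc entry is illustrative, not taken from this run:

    cat > /tmp/good_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF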
00:11:04.728 [2024-11-25 20:29:12.644248] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:04.728 [2024-11-25 20:29:12.644261] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.987 00:11:04.987 real 0m0.721s 00:11:04.987 user 0m0.456s 00:11:04.987 sys 0m0.161s 00:11:04.987 20:29:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.987 20:29:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:04.987 ************************************ 00:11:04.987 END TEST bdev_json_nonarray 00:11:04.987 ************************************ 00:11:04.987 20:29:12 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:11:04.987 20:29:12 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:11:04.987 20:29:12 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:11:04.987 20:29:12 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:04.987 20:29:12 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:11:04.987 20:29:12 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:04.988 20:29:12 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:04.988 20:29:12 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:11:04.988 20:29:12 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:11:04.988 20:29:12 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:11:04.988 20:29:12 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:11:04.988 00:11:04.988 real 0m43.670s 00:11:04.988 user 1m3.783s 00:11:04.988 sys 0m8.178s 00:11:04.988 20:29:12 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.988 20:29:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.988 ************************************ 00:11:04.988 END TEST blockdev_nvme 00:11:04.988 ************************************ 00:11:04.988 20:29:13 -- spdk/autotest.sh@209 -- # uname -s 00:11:04.988 20:29:13 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:11:04.988 20:29:13 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:04.988 20:29:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.988 20:29:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.988 20:29:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.988 ************************************ 00:11:04.988 START TEST blockdev_nvme_gpt 00:11:04.988 ************************************ 00:11:04.988 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:05.247 * Looking for test storage... 
00:11:05.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:05.247 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:05.247 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:11:05.247 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:05.247 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.247 20:29:13 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:11:05.247 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.247 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:05.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.247 --rc genhtml_branch_coverage=1 00:11:05.247 --rc genhtml_function_coverage=1 00:11:05.247 --rc genhtml_legend=1 00:11:05.247 --rc geninfo_all_blocks=1 00:11:05.247 --rc geninfo_unexecuted_blocks=1 00:11:05.247 00:11:05.247 ' 00:11:05.247 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:05.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.248 --rc 
genhtml_branch_coverage=1 00:11:05.248 --rc genhtml_function_coverage=1 00:11:05.248 --rc genhtml_legend=1 00:11:05.248 --rc geninfo_all_blocks=1 00:11:05.248 --rc geninfo_unexecuted_blocks=1 00:11:05.248 00:11:05.248 ' 00:11:05.248 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:05.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.248 --rc genhtml_branch_coverage=1 00:11:05.248 --rc genhtml_function_coverage=1 00:11:05.248 --rc genhtml_legend=1 00:11:05.248 --rc geninfo_all_blocks=1 00:11:05.248 --rc geninfo_unexecuted_blocks=1 00:11:05.248 00:11:05.248 ' 00:11:05.248 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:05.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.248 --rc genhtml_branch_coverage=1 00:11:05.248 --rc genhtml_function_coverage=1 00:11:05.248 --rc genhtml_legend=1 00:11:05.248 --rc geninfo_all_blocks=1 00:11:05.248 --rc geninfo_unexecuted_blocks=1 00:11:05.248 00:11:05.248 ' 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62123 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:05.248 20:29:13 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62123 00:11:05.248 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62123 ']' 00:11:05.248 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.248 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.248 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.248 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.248 20:29:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:05.507 [2024-11-25 20:29:13.439523] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:11:05.507 [2024-11-25 20:29:13.440284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62123 ] 00:11:05.507 [2024-11-25 20:29:13.626125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.768 [2024-11-25 20:29:13.771508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.707 20:29:14 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.707 20:29:14 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:11:06.707 20:29:14 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:06.707 20:29:14 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:11:06.707 20:29:14 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:07.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.535 Waiting for block devices as requested 00:11:07.794 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:07.794 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.053 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.053 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.328 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:13.328 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:13.328 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
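The get_zoned_devs sweep above walks every /sys/block/nvme* node and compares its queue/zoned attribute against "none"; every node checked here reports "none", so no device is excluded. The same probe as a standalone helper (sketch mirroring the autotest_common.sh logic):

    is_block_zoned() {
        local dev=$1
        # The kernel reports "none" for conventional block devices; anything
        # else means zoned. A missing attribute is treated as not zoned here
        # (assumption; the original script's fallback is not shown in the log).
        [[ -e /sys/block/$dev/queue/zoned ]] || return 1
        [[ $(</sys/block/$dev/queue/zoned) != none ]]
    }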
00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:11:13.329 BYT; 00:11:13.329 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:11:13.329 BYT; 00:11:13.329 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:13.329 20:29:21 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:13.329 20:29:21 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:14.267 The operation has completed successfully. 00:11:14.267 20:29:22 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:15.660 The operation has completed successfully. 00:11:15.660 20:29:23 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:16.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:16.796 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.796 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.796 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.054 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.054 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:17.054 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.054 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.054 [] 00:11:17.054 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.054 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:17.054 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:17.054 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:17.054 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:17.054 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:17.054 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.054 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:17.622 20:29:25 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.622 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:17.622 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:17.623 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8a76dce7-d1a6-41cc-a1b3-23fa2839cfda"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8a76dce7-d1a6-41cc-a1b3-23fa2839cfda",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "000fde29-b549-4076-9fdc-0e6d8d30abaf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "000fde29-b549-4076-9fdc-0e6d8d30abaf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "91fcf55d-e457-427b-b7a8-7c24cbdd4026"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91fcf55d-e457-427b-b7a8-7c24cbdd4026",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "60226fc5-7a4a-4a7f-836c-ff1d1930f355"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "60226fc5-7a4a-4a7f-836c-ff1d1930f355",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0eee86af-f1d6-43ad-b7f6-bdffd8e2655e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0eee86af-f1d6-43ad-b7f6-bdffd8e2655e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:17.623 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:17.623 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:11:17.623 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:17.623 20:29:25 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62123 00:11:17.623 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62123 ']' 00:11:17.623 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62123 00:11:17.623 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:11:17.623 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.623 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62123 00:11:17.882 killing process with pid 62123 00:11:17.882 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.882 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.882 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62123' 00:11:17.882 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62123 00:11:17.882 20:29:25 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62123 00:11:20.416 20:29:28 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:20.416 20:29:28 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:20.416 20:29:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:20.416 20:29:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.416 20:29:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:20.416 ************************************ 00:11:20.416 START TEST bdev_hello_world 00:11:20.416 ************************************ 00:11:20.416 20:29:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:20.674 
[2024-11-25 20:29:28.584726] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:11:20.674 [2024-11-25 20:29:28.584873] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62770 ] 00:11:20.674 [2024-11-25 20:29:28.769491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.932 [2024-11-25 20:29:28.925123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.869 [2024-11-25 20:29:29.658172] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:21.869 [2024-11-25 20:29:29.658232] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:21.869 [2024-11-25 20:29:29.658265] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:21.869 [2024-11-25 20:29:29.661615] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:21.869 [2024-11-25 20:29:29.662258] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:21.869 [2024-11-25 20:29:29.662445] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:21.869 [2024-11-25 20:29:29.662900] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:21.869 00:11:21.869 [2024-11-25 20:29:29.662995] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:22.805 00:11:22.805 real 0m2.286s 00:11:22.805 user 0m1.848s 00:11:22.805 sys 0m0.328s 00:11:22.805 ************************************ 00:11:22.805 END TEST bdev_hello_world 00:11:22.805 ************************************ 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:22.805 20:29:30 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:22.805 20:29:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.805 20:29:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.805 20:29:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:22.805 ************************************ 00:11:22.805 START TEST bdev_bounds 00:11:22.805 ************************************ 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62818 00:11:22.805 Process bdevio pid: 62818 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62818' 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62818 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62818 ']' 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.805 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.805 20:29:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 [2024-11-25 20:29:30.964982] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:11:23.064 [2024-11-25 20:29:30.965135] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62818 ] 00:11:23.064 [2024-11-25 20:29:31.154773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:23.323 [2024-11-25 20:29:31.276077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.323 [2024-11-25 20:29:31.276226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.323 [2024-11-25 20:29:31.276259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.890 20:29:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.890 20:29:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:23.890 20:29:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:24.148 I/O targets: 00:11:24.148 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:24.148 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:24.148 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:24.148 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:24.148 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:24.148 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:24.148 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:24.148 00:11:24.148 00:11:24.148 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.148 http://cunit.sourceforge.net/ 00:11:24.148 00:11:24.148 00:11:24.148 Suite: bdevio tests on: Nvme3n1 00:11:24.148 Test: blockdev write read block ...passed 00:11:24.148 Test: blockdev write zeroes read block ...passed 00:11:24.148 Test: blockdev write zeroes read no split ...passed 00:11:24.148 Test: blockdev write zeroes read split ...passed 00:11:24.148 Test: blockdev write zeroes read split partial ...passed 00:11:24.148 Test: blockdev reset ...[2024-11-25 20:29:32.160029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:24.148 [2024-11-25 20:29:32.163893] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
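Each "blockdev reset" case in these bdevio suites disconnects and re-attaches the controller through the bdev layer, which is what the nvme_ctrlr_disconnect / "Resetting controller successful." pair records for each namespace. The same kind of reset can also be triggered over the RPC socket on a running target (sketch; the bdev_nvme_reset_controller RPC name and the Nvme3 controller name are assumptions based on how the controllers were attached above):

    # Assumed RPC and controller name; not part of this test run:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3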
00:11:24.148 passed 00:11:24.148 Test: blockdev write read 8 blocks ...passed 00:11:24.148 Test: blockdev write read size > 128k ...passed 00:11:24.148 Test: blockdev write read invalid size ...passed 00:11:24.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.148 Test: blockdev write read max offset ...passed 00:11:24.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.148 Test: blockdev writev readv 8 blocks ...passed 00:11:24.148 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.148 Test: blockdev writev readv block ...passed 00:11:24.148 Test: blockdev writev readv size > 128k ...passed 00:11:24.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.148 Test: blockdev comparev and writev ...[2024-11-25 20:29:32.173110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af004000 len:0x1000 00:11:24.148 [2024-11-25 20:29:32.173162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:24.148 passed 00:11:24.148 Test: blockdev nvme passthru rw ...passed 00:11:24.148 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.148 Test: blockdev nvme admin passthru ...[2024-11-25 20:29:32.173876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:24.148 [2024-11-25 20:29:32.173912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:24.148 passed 00:11:24.148 Test: blockdev copy ...passed 00:11:24.148 Suite: bdevio tests on: Nvme2n3 00:11:24.148 Test: blockdev write read block ...passed 00:11:24.148 Test: blockdev write zeroes read block ...passed 00:11:24.148 Test: blockdev write zeroes read no split ...passed 00:11:24.148 Test: blockdev write zeroes read split ...passed 00:11:24.148 Test: blockdev write zeroes read split partial ...passed 00:11:24.148 Test: blockdev reset ...[2024-11-25 20:29:32.252161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:24.148 [2024-11-25 20:29:32.256300] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:24.148 passed 00:11:24.149 Test: blockdev write read 8 blocks ...passed 00:11:24.149 Test: blockdev write read size > 128k ...passed 00:11:24.149 Test: blockdev write read invalid size ...passed 00:11:24.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.149 Test: blockdev write read max offset ...passed 00:11:24.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.149 Test: blockdev writev readv 8 blocks ...passed 00:11:24.149 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.149 Test: blockdev writev readv block ...passed 00:11:24.149 Test: blockdev writev readv size > 128k ...passed 00:11:24.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.149 Test: blockdev comparev and writev ...[2024-11-25 20:29:32.265348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af002000 len:0x1000 00:11:24.149 [2024-11-25 20:29:32.265397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:24.149 passed 00:11:24.149 Test: blockdev nvme passthru rw ...passed 00:11:24.149 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.149 Test: blockdev nvme admin passthru ...[2024-11-25 20:29:32.266185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:24.149 [2024-11-25 20:29:32.266222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:24.149 passed 00:11:24.149 Test: blockdev copy ...passed 00:11:24.149 Suite: bdevio tests on: Nvme2n2 00:11:24.149 Test: blockdev write read block ...passed 00:11:24.149 Test: blockdev write zeroes read block ...passed 00:11:24.408 Test: blockdev write zeroes read no split ...passed 00:11:24.408 Test: blockdev write zeroes read split ...passed 00:11:24.408 Test: blockdev write zeroes read split partial ...passed 00:11:24.408 Test: blockdev reset ...[2024-11-25 20:29:32.345805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:24.408 passed 00:11:24.408 Test: blockdev write read 8 blocks ...[2024-11-25 20:29:32.350006] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:24.408 passed 00:11:24.408 Test: blockdev write read size > 128k ...passed 00:11:24.408 Test: blockdev write read invalid size ...passed 00:11:24.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.408 Test: blockdev write read max offset ...passed 00:11:24.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.408 Test: blockdev writev readv 8 blocks ...passed 00:11:24.408 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.408 Test: blockdev writev readv block ...passed 00:11:24.408 Test: blockdev writev readv size > 128k ...passed 00:11:24.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.408 Test: blockdev comparev and writev ...[2024-11-25 20:29:32.359364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1e38000 len:0x1000 00:11:24.408 [2024-11-25 20:29:32.359414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:24.408 passed 00:11:24.408 Test: blockdev nvme passthru rw ...passed 00:11:24.408 Test: blockdev nvme passthru vendor specific ...[2024-11-25 20:29:32.360337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:24.408 [2024-11-25 20:29:32.360372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:11:24.408 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:11:24.408 passed 00:11:24.408 Test: blockdev copy ...passed 00:11:24.408 Suite: bdevio tests on: Nvme2n1 00:11:24.408 Test: blockdev write read block ...passed 00:11:24.408 Test: blockdev write zeroes read block ...passed 00:11:24.408 Test: blockdev write zeroes read no split ...passed 00:11:24.408 Test: blockdev write zeroes read split ...passed 00:11:24.408 Test: blockdev write zeroes read split partial ...passed 00:11:24.408 Test: blockdev reset ...[2024-11-25 20:29:32.459780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:24.408 [2024-11-25 20:29:32.463865] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:24.408 passed 00:11:24.408 Test: blockdev write read 8 blocks ...passed 00:11:24.408 Test: blockdev write read size > 128k ...passed 00:11:24.408 Test: blockdev write read invalid size ...passed 00:11:24.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.408 Test: blockdev write read max offset ...passed 00:11:24.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.408 Test: blockdev writev readv 8 blocks ...passed 00:11:24.408 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.408 Test: blockdev writev readv block ...passed 00:11:24.408 Test: blockdev writev readv size > 128k ...passed 00:11:24.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.408 Test: blockdev comparev and writev ...[2024-11-25 20:29:32.473270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1e34000 len:0x1000 00:11:24.408 [2024-11-25 20:29:32.473323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:24.408 passed 00:11:24.408 Test: blockdev nvme passthru rw ...passed 00:11:24.408 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.408 Test: blockdev nvme admin passthru ...[2024-11-25 20:29:32.474214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:24.408 [2024-11-25 20:29:32.474247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:24.408 passed 00:11:24.408 Test: blockdev copy ...passed 00:11:24.408 Suite: bdevio tests on: Nvme1n1p2 00:11:24.408 Test: blockdev write read block ...passed 00:11:24.408 Test: blockdev write zeroes read block ...passed 00:11:24.408 Test: blockdev write zeroes read no split ...passed 00:11:24.408 Test: blockdev write zeroes read split ...passed 00:11:24.667 Test: blockdev write zeroes read split partial ...passed 00:11:24.667 Test: blockdev reset ...[2024-11-25 20:29:32.557333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:24.667 [2024-11-25 20:29:32.561120] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:24.667 passed 00:11:24.667 Test: blockdev write read 8 blocks ...passed 00:11:24.667 Test: blockdev write read size > 128k ...passed 00:11:24.667 Test: blockdev write read invalid size ...passed 00:11:24.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.667 Test: blockdev write read max offset ...passed 00:11:24.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.667 Test: blockdev writev readv 8 blocks ...passed 00:11:24.667 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.667 Test: blockdev writev readv block ...passed 00:11:24.667 Test: blockdev writev readv size > 128k ...passed 00:11:24.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.667 Test: blockdev comparev and writev ...[2024-11-25 20:29:32.570184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c1e30000 len:0x1000 00:11:24.667 [2024-11-25 20:29:32.570232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:24.667 passed 00:11:24.667 Test: blockdev nvme passthru rw ...passed 00:11:24.667 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.667 Test: blockdev nvme admin passthru ...passed 00:11:24.667 Test: blockdev copy ...passed 00:11:24.667 Suite: bdevio tests on: Nvme1n1p1 00:11:24.667 Test: blockdev write read block ...passed 00:11:24.667 Test: blockdev write zeroes read block ...passed 00:11:24.667 Test: blockdev write zeroes read no split ...passed 00:11:24.667 Test: blockdev write zeroes read split ...passed 00:11:24.667 Test: blockdev write zeroes read split partial ...passed 00:11:24.667 Test: blockdev reset ...[2024-11-25 20:29:32.661024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:24.667 [2024-11-25 20:29:32.665382] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:24.667 passed 00:11:24.667 Test: blockdev write read 8 blocks ...passed 00:11:24.667 Test: blockdev write read size > 128k ...passed 00:11:24.667 Test: blockdev write read invalid size ...passed 00:11:24.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.667 Test: blockdev write read max offset ...passed 00:11:24.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.667 Test: blockdev writev readv 8 blocks ...passed 00:11:24.667 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.667 Test: blockdev writev readv block ...passed 00:11:24.667 Test: blockdev writev readv size > 128k ...passed 00:11:24.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.667 Test: blockdev comparev and writev ...[2024-11-25 20:29:32.675222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2afa0e000 len:0x1000 00:11:24.667 [2024-11-25 20:29:32.675273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:24.667 passed 00:11:24.667 Test: blockdev nvme passthru rw ...passed 00:11:24.667 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.667 Test: blockdev nvme admin passthru ...passed 00:11:24.667 Test: blockdev copy ...passed 00:11:24.667 Suite: bdevio tests on: Nvme0n1 00:11:24.667 Test: blockdev write read block ...passed 00:11:24.667 Test: blockdev write zeroes read block ...passed 00:11:24.667 Test: blockdev write zeroes read no split ...passed 00:11:24.667 Test: blockdev write zeroes read split ...passed 00:11:24.667 Test: blockdev write zeroes read split partial ...passed 00:11:24.667 Test: blockdev reset ...[2024-11-25 20:29:32.765939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:24.667 passed 00:11:24.667 Test: blockdev write read 8 blocks ...[2024-11-25 20:29:32.769752] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:24.667 passed 00:11:24.667 Test: blockdev write read size > 128k ...passed 00:11:24.667 Test: blockdev write read invalid size ...passed 00:11:24.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:24.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:24.667 Test: blockdev write read max offset ...passed 00:11:24.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:24.667 Test: blockdev writev readv 8 blocks ...passed 00:11:24.667 Test: blockdev writev readv 30 x 1block ...passed 00:11:24.667 Test: blockdev writev readv block ...passed 00:11:24.667 Test: blockdev writev readv size > 128k ...passed 00:11:24.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:24.667 Test: blockdev comparev and writev ...[2024-11-25 20:29:32.776820] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:24.667 separate metadata which is not supported yet. 
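Note: unlike the other namespaces, Nvme0n1 skips the comparev-and-writev case because it is formatted with a separate (non-interleaved) metadata buffer, which bdevio does not support yet. The metadata layout behind that skip can be inspected over the same RPC interface; a sketch, assuming the md_size/md_interleave field names in this build's bdev_get_bdevs output:

    # Sketch: check a bdev's metadata layout over RPC (jq field names assumed).
    # md_size > 0 together with "md_interleave": false means separate metadata.
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'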
00:11:24.667 passed 00:11:24.667 Test: blockdev nvme passthru rw ...passed 00:11:24.667 Test: blockdev nvme passthru vendor specific ...passed 00:11:24.667 Test: blockdev nvme admin passthru ...[2024-11-25 20:29:32.777227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:24.667 [2024-11-25 20:29:32.777270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:24.667 passed 00:11:24.667 Test: blockdev copy ...passed 00:11:24.667 00:11:24.667 Run Summary: Type Total Ran Passed Failed Inactive 00:11:24.667 suites 7 7 n/a 0 0 00:11:24.667 tests 161 161 161 0 0 00:11:24.667 asserts 1025 1025 1025 0 n/a 00:11:24.667 00:11:24.667 Elapsed time = 1.954 seconds 00:11:24.667 0 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62818 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62818 ']' 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62818 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62818 00:11:24.925 killing process with pid 62818 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62818' 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62818 00:11:24.925 20:29:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62818 00:11:25.890 20:29:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:25.890 00:11:25.890 real 0m3.085s 00:11:25.890 user 0m7.842s 00:11:25.890 sys 0m0.463s 00:11:25.890 20:29:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.890 20:29:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:25.890 ************************************ 00:11:25.890 END TEST bdev_bounds 00:11:25.890 ************************************ 00:11:25.890 20:29:34 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:25.890 20:29:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.890 20:29:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.890 20:29:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.149 ************************************ 00:11:26.149 START TEST bdev_nbd 00:11:26.149 ************************************ 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:26.149 20:29:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62883 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62883 /var/tmp/spdk-nbd.sock 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62883 ']' 00:11:26.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.149 20:29:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:26.149 [2024-11-25 20:29:34.128735] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
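Note: nbd_function_test begins by launching a bare bdev application (bdev_svc) against the test's JSON config on a private RPC socket, then waiting in waitforlisten until that socket answers. Reduced to its essentials, with the rpc_get_methods poll as a simplified stand-in for waitforlisten's real check:

    # Launch pattern from the trace above (paths as they appear in the trace).
    rpc_sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    # Simplified stand-in for waitforlisten: poll until the app answers an RPC.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; do
        sleep 0.1
    done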
00:11:26.149 [2024-11-25 20:29:34.129711] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.408 [2024-11-25 20:29:34.334312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.408 [2024-11-25 20:29:34.480554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:27.345 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:27.604 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.605 1+0 records in 00:11:27.605 1+0 records out 00:11:27.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665629 s, 6.2 MB/s 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:27.605 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.863 1+0 records in 00:11:27.863 1+0 records out 00:11:27.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489276 s, 8.4 MB/s 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:27.863 20:29:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.122 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.122 1+0 records in 00:11:28.122 1+0 records out 00:11:28.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780446 s, 5.2 MB/s 00:11:28.123 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.123 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.123 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.123 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.123 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.123 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.123 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:28.123 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.381 1+0 records in 00:11:28.381 1+0 records out 00:11:28.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700173 s, 5.8 MB/s 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:28.381 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.640 1+0 records in 00:11:28.640 1+0 records out 00:11:28.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000938014 s, 4.4 MB/s 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:28.640 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.899 1+0 records in 00:11:28.899 1+0 records out 00:11:28.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00224042 s, 1.8 MB/s 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:28.899 20:29:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.158 1+0 records in 00:11:29.158 1+0 records out 00:11:29.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857647 s, 4.8 MB/s 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:29.158 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.416 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd0", 00:11:29.416 "bdev_name": "Nvme0n1" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd1", 00:11:29.416 "bdev_name": "Nvme1n1p1" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd2", 00:11:29.416 "bdev_name": "Nvme1n1p2" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd3", 00:11:29.416 "bdev_name": "Nvme2n1" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd4", 00:11:29.416 "bdev_name": "Nvme2n2" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd5", 00:11:29.416 "bdev_name": "Nvme2n3" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd6", 00:11:29.416 "bdev_name": "Nvme3n1" 00:11:29.416 } 00:11:29.416 ]' 00:11:29.416 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:29.416 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd0", 00:11:29.416 "bdev_name": "Nvme0n1" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd1", 00:11:29.416 "bdev_name": "Nvme1n1p1" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd2", 00:11:29.416 "bdev_name": "Nvme1n1p2" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd3", 00:11:29.416 "bdev_name": "Nvme2n1" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd4", 00:11:29.416 "bdev_name": "Nvme2n2" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd5", 00:11:29.416 "bdev_name": "Nvme2n3" 00:11:29.416 }, 00:11:29.416 { 00:11:29.416 "nbd_device": "/dev/nbd6", 00:11:29.417 "bdev_name": "Nvme3n1" 00:11:29.417 } 00:11:29.417 ]' 00:11:29.417 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:29.417 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:29.417 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.417 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:29.417 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:29.417 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:29.417 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.417 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.675 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.934 20:29:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.192 20:29:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.451 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.711 20:29:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
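Note: teardown mirrors setup: for each device, nbd_stop_disk detaches the export over RPC and waitfornbd_exit polls /proc/partitions until the kernel has dropped the node. Reconstructed from the xtrace above; the loop bound and the grep are literal, the sleep interval between probes is an assumption:

    # waitfornbd_exit, as its xtrace reads above (sleep interval assumed).
    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # still registered: give the detach time to finish
            else
                break       # gone from /proc/partitions: detach completed
            fi
        done
        return 0
    }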
00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.970 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:31.227 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:31.228 20:29:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.228 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:31.497 /dev/nbd0 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.497 1+0 records in 00:11:31.497 1+0 records out 00:11:31.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717503 s, 5.7 MB/s 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.497 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:31.771 /dev/nbd1 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:31.771 20:29:39 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.771 1+0 records in 00:11:31.771 1+0 records out 00:11:31.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526094 s, 7.8 MB/s 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.771 20:29:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:32.029 /dev/nbd10 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.029 1+0 records in 00:11:32.029 1+0 records out 00:11:32.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060709 s, 6.7 MB/s 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:32.029 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:32.287 /dev/nbd11 00:11:32.287 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.288 1+0 records in 00:11:32.288 1+0 records out 00:11:32.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064406 s, 6.4 MB/s 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:32.288 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:32.547 /dev/nbd12 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
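Note: every nbd_start_disk above is chased by waitfornbd, which waits for the name to appear in /proc/partitions and then proves the device is serviceable with one 4 KiB O_DIRECT read, producing the recurring "1+0 records in / 1+0 records out" lines. Reconstructed from the xtrace; the loop bounds, grep, dd and stat invocations match the trace, the sleep interval is an assumption:

    # waitfornbd, reconstructed from the trace (sleep interval assumed).
    waitfornbd() {
        local nbd_name=$1
        local i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # node registered
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # A fresh nbd node can briefly reject I/O; retry one direct read.
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        if [ "$size" != "0" ]; then
            return 0   # a full block read back: the device is live
        fi
        return 1
    }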
00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.547 1+0 records in 00:11:32.547 1+0 records out 00:11:32.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722114 s, 5.7 MB/s 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:32.547 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:32.806 /dev/nbd13 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.806 1+0 records in 00:11:32.806 1+0 records out 00:11:32.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745596 s, 5.5 MB/s 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.806 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:32.807 20:29:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:33.065 /dev/nbd14 00:11:33.065 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:33.065 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.066 1+0 records in 00:11:33.066 1+0 records out 00:11:33.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794494 s, 5.2 MB/s 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:33.066 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd0", 00:11:33.325 "bdev_name": "Nvme0n1" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd1", 00:11:33.325 "bdev_name": "Nvme1n1p1" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd10", 00:11:33.325 "bdev_name": "Nvme1n1p2" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd11", 00:11:33.325 "bdev_name": "Nvme2n1" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd12", 00:11:33.325 "bdev_name": "Nvme2n2" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd13", 00:11:33.325 "bdev_name": "Nvme2n3" 
00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd14", 00:11:33.325 "bdev_name": "Nvme3n1" 00:11:33.325 } 00:11:33.325 ]' 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd0", 00:11:33.325 "bdev_name": "Nvme0n1" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd1", 00:11:33.325 "bdev_name": "Nvme1n1p1" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd10", 00:11:33.325 "bdev_name": "Nvme1n1p2" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd11", 00:11:33.325 "bdev_name": "Nvme2n1" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd12", 00:11:33.325 "bdev_name": "Nvme2n2" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd13", 00:11:33.325 "bdev_name": "Nvme2n3" 00:11:33.325 }, 00:11:33.325 { 00:11:33.325 "nbd_device": "/dev/nbd14", 00:11:33.325 "bdev_name": "Nvme3n1" 00:11:33.325 } 00:11:33.325 ]' 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:33.325 /dev/nbd1 00:11:33.325 /dev/nbd10 00:11:33.325 /dev/nbd11 00:11:33.325 /dev/nbd12 00:11:33.325 /dev/nbd13 00:11:33.325 /dev/nbd14' 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:33.325 /dev/nbd1 00:11:33.325 /dev/nbd10 00:11:33.325 /dev/nbd11 00:11:33.325 /dev/nbd12 00:11:33.325 /dev/nbd13 00:11:33.325 /dev/nbd14' 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:33.325 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:33.326 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:33.326 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:33.326 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:33.326 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:33.326 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:33.326 256+0 records in 00:11:33.326 256+0 records out 00:11:33.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122807 s, 85.4 MB/s 00:11:33.326 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.326 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:33.585 256+0 records in 00:11:33.585 256+0 records out 00:11:33.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.14795 s, 7.1 MB/s 00:11:33.585 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.585 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:33.585 256+0 records in 00:11:33.585 256+0 records out 00:11:33.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154406 s, 6.8 MB/s 00:11:33.585 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.585 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:33.844 256+0 records in 00:11:33.844 256+0 records out 00:11:33.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152496 s, 6.9 MB/s 00:11:33.844 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.844 20:29:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:34.103 256+0 records in 00:11:34.103 256+0 records out 00:11:34.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153664 s, 6.8 MB/s 00:11:34.103 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.103 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:34.103 256+0 records in 00:11:34.103 256+0 records out 00:11:34.103 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160014 s, 6.6 MB/s 00:11:34.103 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.103 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:34.362 256+0 records in 00:11:34.362 256+0 records out 00:11:34.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151576 s, 6.9 MB/s 00:11:34.362 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.362 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:34.622 256+0 records in 00:11:34.622 256+0 records out 00:11:34.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156761 s, 6.7 MB/s 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.622 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.881 20:29:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.139 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.398 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.657 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.916 20:29:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.175 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.443 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:36.444 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:36.707 malloc_lvol_verify 00:11:36.707 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:36.966 119bd88c-862a-4fa9-978e-1e50e8e2463a 00:11:36.966 20:29:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:37.226 51e347e5-ef54-4ee4-bfad-cf7c4cf30a0b 00:11:37.226 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:37.226 /dev/nbd0 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:37.485 mke2fs 1.47.0 (5-Feb-2023) 00:11:37.485 Discarding device blocks: 0/4096 done 00:11:37.485 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:37.485 00:11:37.485 Allocating group tables: 0/1 done 00:11:37.485 Writing inode tables: 0/1 done 00:11:37.485 Creating journal (1024 blocks): done 00:11:37.485 Writing superblocks and filesystem accounting information: 0/1 done 00:11:37.485 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:37.485 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62883 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62883 ']' 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62883 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62883 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.744 killing process with pid 62883 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62883' 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62883 00:11:37.744 20:29:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62883 00:11:39.168 20:29:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:39.168 00:11:39.168 real 0m13.005s 00:11:39.168 user 0m16.527s 00:11:39.168 sys 0m5.470s 00:11:39.168 20:29:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.168 20:29:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:39.168 ************************************ 00:11:39.168 END TEST bdev_nbd 00:11:39.168 ************************************ 00:11:39.168 20:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:39.168 20:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:11:39.168 20:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:11:39.168 skipping fio tests on NVMe due to multi-ns failures. 00:11:39.168 20:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:39.168 20:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:39.168 20:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:39.168 20:29:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:39.168 20:29:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.168 20:29:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:39.168 ************************************ 00:11:39.168 START TEST bdev_verify 00:11:39.168 ************************************ 00:11:39.168 20:29:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:39.168 [2024-11-25 20:29:47.209451] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:11:39.168 [2024-11-25 20:29:47.209591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:11:39.427 [2024-11-25 20:29:47.399640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:39.427 [2024-11-25 20:29:47.553251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.427 [2024-11-25 20:29:47.553284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.364 Running I/O for 5 seconds... 
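For readers skimming the trace, the bdevperf invocation that starts this test is worth unpacking. The glosses below reflect how the flags play out in this run (the -C flag is passed through by the test script and is left unannotated here):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \  # bdev configuration to load
        -q 128 \       # 128 outstanding I/Os per job (the 'depth: 128' in the results)
        -o 4096 \      # 4 KiB per I/O (the 'IO size: 4096' in the results)
        -w verify \    # write a pattern, read it back, and compare
        -t 5 \         # run for 5 seconds
        -C -m 0x3      # core mask 0x3: reactors on cores 0 and 1, hence two jobs per bdev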
00:11:42.674 22016.00 IOPS, 86.00 MiB/s
[2024-11-25T20:29:51.747Z] 22208.00 IOPS, 86.75 MiB/s
[2024-11-25T20:29:52.685Z] 22997.33 IOPS, 89.83 MiB/s
[2024-11-25T20:29:53.623Z] 23200.00 IOPS, 90.62 MiB/s
[2024-11-25T20:29:53.623Z] 22886.40 IOPS, 89.40 MiB/s
00:11:45.487 Latency(us)
00:11:45.487 [2024-11-25T20:29:53.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:45.487 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x0 length 0xbd0bd
00:11:45.487 Nvme0n1 : 5.06 1619.92 6.33 0.00 0.00 78728.15 18107.94 80432.94
00:11:45.487 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:11:45.487 Nvme0n1 : 5.05 1596.92 6.24 0.00 0.00 79904.00 18213.22 79590.71
00:11:45.487 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x0 length 0x4ff80
00:11:45.487 Nvme1n1p1 : 5.06 1619.50 6.33 0.00 0.00 78502.04 20318.79 64430.57
00:11:45.487 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x4ff80 length 0x4ff80
00:11:45.487 Nvme1n1p1 : 5.05 1596.43 6.24 0.00 0.00 79807.01 21055.74 73273.99
00:11:45.487 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x0 length 0x4ff7f
00:11:45.487 Nvme1n1p2 : 5.06 1619.04 6.32 0.00 0.00 78352.49 20318.79 62746.11
00:11:45.487 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:11:45.487 Nvme1n1p2 : 5.05 1595.97 6.23 0.00 0.00 79653.88 23056.04 66957.26
00:11:45.487 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x0 length 0x80000
00:11:45.487 Nvme2n1 : 5.08 1626.40 6.35 0.00 0.00 77900.15 5106.02 65693.92
00:11:45.487 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x80000 length 0x80000
00:11:45.487 Nvme2n1 : 5.07 1604.64 6.27 0.00 0.00 79192.60 4421.71 69062.84
00:11:45.487 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x0 length 0x80000
00:11:45.487 Nvme2n2 : 5.08 1626.04 6.35 0.00 0.00 77784.04 5132.34 68220.61
00:11:45.487 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x80000 length 0x80000
00:11:45.487 Nvme2n2 : 5.07 1604.20 6.27 0.00 0.00 79094.75 4421.71 69062.84
00:11:45.487 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x0 length 0x80000
00:11:45.487 Nvme2n3 : 5.09 1634.97 6.39 0.00 0.00 77327.06 9264.53 70326.18
00:11:45.487 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x80000 length 0x80000
00:11:45.487 Nvme2n3 : 5.07 1603.78 6.26 0.00 0.00 78982.84 4553.30 68220.61
00:11:45.487 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x0 length 0x20000
00:11:45.487 Nvme3n1 : 5.09 1634.60 6.39 0.00 0.00 77273.26 9317.17 73273.99
00:11:45.487 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:45.487 Verification LBA range: start 0x20000 length 0x20000
00:11:45.487 Nvme3n1 : 5.08 1611.97 6.30 0.00 0.00 78511.53 11054.27 66957.26
00:11:45.487 [2024-11-25T20:29:53.623Z] ===================================================================================================================
00:11:45.487 [2024-11-25T20:29:53.623Z] Total : 22594.39 88.26 0.00 0.00 78635.36 4421.71 80432.94
00:11:47.423
00:11:47.423 real 0m7.979s
00:11:47.423 user 0m14.600s
00:11:47.423 sys 0m0.419s
00:11:47.423 20:29:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:47.423 20:29:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:11:47.423 ************************************
00:11:47.423 END TEST bdev_verify
00:11:47.423 ************************************
00:11:47.423 20:29:55 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:47.423 20:29:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:11:47.423 20:29:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:47.423 20:29:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:47.423 ************************************
00:11:47.423 START TEST bdev_verify_big_io
00:11:47.423 ************************************
00:11:47.423 20:29:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:47.423 [2024-11-25 20:29:55.263864] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:11:47.423 [2024-11-25 20:29:55.264007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63418 ]
00:11:47.423 [2024-11-25 20:29:55.453934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:47.682 [2024-11-25 20:29:55.598806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:47.682 [2024-11-25 20:29:55.598842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:48.618 Running I/O for 5 seconds...
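If tables like the one above need to be scraped out of a saved console log, the aggregate row is the easiest target; a small sketch (the log file name is hypothetical, and the field arithmetic assumes the 'Total :' row layout shown above, which ends in seven numeric columns):

    # Print aggregate IOPS and MiB/s from a saved bdevperf latency table.
    awk '/ Total : / { print "IOPS=" $(NF-6), "MiB/s=" $(NF-5) }' bdevperf-console.log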
00:11:53.730 1655.00 IOPS, 103.44 MiB/s
[2024-11-25T20:30:02.803Z] 3167.00 IOPS, 197.94 MiB/s
[2024-11-25T20:30:02.804Z] 3790.67 IOPS, 236.92 MiB/s
00:11:54.668 Latency(us)
00:11:54.668 [2024-11-25T20:30:02.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:54.668 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x0 length 0xbd0b
00:11:54.668 Nvme0n1 : 5.63 130.85 8.18 0.00 0.00 926429.05 21161.02 970248.64
00:11:54.668 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0xbd0b length 0xbd0b
00:11:54.668 Nvme0n1 : 5.70 123.50 7.72 0.00 0.00 988049.11 29899.16 1273451.33
00:11:54.668 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x0 length 0x4ff8
00:11:54.668 Nvme1n1p1 : 5.71 139.66 8.73 0.00 0.00 867192.38 74958.44 902870.26
00:11:54.668 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x4ff8 length 0x4ff8
00:11:54.668 Nvme1n1p1 : 5.70 135.06 8.44 0.00 0.00 880323.93 86328.55 889394.58
00:11:54.668 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x0 length 0x4ff7
00:11:54.668 Nvme1n1p2 : 5.76 144.52 9.03 0.00 0.00 823934.38 39163.68 805171.61
00:11:54.668 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x4ff7 length 0x4ff7
00:11:54.668 Nvme1n1p2 : 5.70 139.39 8.71 0.00 0.00 846562.17 97698.65 808540.53
00:11:54.668 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x0 length 0x8000
00:11:54.668 Nvme2n1 : 5.79 137.32 8.58 0.00 0.00 843707.99 41058.70 1516013.49
00:11:54.668 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x8000 length 0x8000
00:11:54.668 Nvme2n1 : 5.74 144.36 9.02 0.00 0.00 805179.48 36005.32 822016.21
00:11:54.668 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x0 length 0x8000
00:11:54.668 Nvme2n2 : 5.81 141.16 8.82 0.00 0.00 801228.02 28214.70 1536227.01
00:11:54.668 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x8000 length 0x8000
00:11:54.668 Nvme2n2 : 5.80 149.35 9.33 0.00 0.00 759663.43 18423.78 852336.48
00:11:54.668 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x0 length 0x8000
00:11:54.668 Nvme2n3 : 5.81 145.40 9.09 0.00 0.00 759584.74 17160.43 1556440.52
00:11:54.668 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x8000 length 0x8000
00:11:54.668 Nvme2n3 : 5.80 154.46 9.65 0.00 0.00 719598.23 33689.19 855705.39
00:11:54.668 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x0 length 0x2000
00:11:54.668 Nvme3n1 : 5.90 170.63 10.66 0.00 0.00 633421.97 1987.14 1583391.87
00:11:54.668 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:54.668 Verification LBA range: start 0x2000 length 0x2000
00:11:54.668 Nvme3n1 : 5.88 174.21 10.89 0.00 0.00 624666.40 1223.87 875918.91
00:11:54.668 [2024-11-25T20:30:02.804Z] ===================================================================================================================
00:11:54.668 [2024-11-25T20:30:02.804Z] Total : 2029.86 126.87 0.00 0.00 795699.32 1223.87 1583391.87
00:11:56.048
00:11:56.048 real 0m8.811s
00:11:56.048 user 0m16.302s
00:11:56.048 sys 0m0.425s
00:11:56.048 20:30:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:56.048 ************************************
00:11:56.048 END TEST bdev_verify_big_io
00:11:56.048 ************************************
00:11:56.048 20:30:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:11:56.048 20:30:04 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:56.048 20:30:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:56.048 20:30:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:56.048 20:30:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:56.048 ************************************
00:11:56.048 START TEST bdev_write_zeroes
00:11:56.048 ************************************
00:11:56.048 20:30:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:56.048 [2024-11-25 20:30:04.155263] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:11:56.048 [2024-11-25 20:30:04.155405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63533 ]
00:11:56.307 [2024-11-25 20:30:04.334583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:56.565 [2024-11-25 20:30:04.451028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:57.134 Running I/O for 1 seconds...
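The two verify passes are directly comparable as IOPS × I/O size: the 4 KiB pass sustained 22594.39 IOPS × 4 KiB ≈ 88.3 MiB/s, while the 64 KiB pass completed only 2029.86 IOPS but moved 2029.86 × 64 KiB ≈ 126.9 MiB/s — roughly a tenth of the operations, yet more bytes per second, matching the MiB/s columns of both tables.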
00:11:58.068 70272.00 IOPS, 274.50 MiB/s
00:11:58.068 Latency(us)
00:11:58.068 [2024-11-25T20:30:06.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:58.068 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:58.068 Nvme0n1 : 1.02 10003.61 39.08 0.00 0.00 12762.05 10685.79 26530.24
00:11:58.068 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:58.068 Nvme1n1p1 : 1.02 9992.78 39.03 0.00 0.00 12727.26 10896.35 22740.20
00:11:58.068 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:58.068 Nvme1n1p2 : 1.03 9982.26 38.99 0.00 0.00 12685.76 10475.23 22003.25
00:11:58.068 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:58.068 Nvme2n1 : 1.03 9972.59 38.96 0.00 0.00 12659.39 10738.43 20318.79
00:11:58.068 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:58.068 Nvme2n2 : 1.03 9963.49 38.92 0.00 0.00 12650.16 10159.40 19897.68
00:11:58.068 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:58.068 Nvme2n3 : 1.03 9954.45 38.88 0.00 0.00 12628.90 8948.69 20634.63
00:11:58.068 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:58.068 Nvme3n1 : 1.03 9883.16 38.61 0.00 0.00 12692.16 8896.05 22003.25
00:11:58.068 [2024-11-25T20:30:06.204Z] ===================================================================================================================
00:11:58.068 [2024-11-25T20:30:06.204Z] Total : 69752.35 272.47 0.00 0.00 12686.52 8896.05 26530.24
00:11:59.469
00:11:59.469 real 0m3.483s
00:11:59.469 user 0m3.072s
00:11:59.469 sys 0m0.293s
00:11:59.469 20:30:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:59.469 20:30:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:11:59.469 ************************************
00:11:59.469 END TEST bdev_write_zeroes
00:11:59.469 ************************************
00:11:59.728 20:30:07 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:59.728 20:30:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:59.728 20:30:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:59.728 20:30:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:59.728 ************************************
00:11:59.728 START TEST bdev_json_nonenclosed
00:11:59.728 ************************************
00:11:59.728 20:30:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:59.728 [2024-11-25 20:30:07.720032] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
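As a sanity check on the write_zeroes table above, 70272.00 IOPS × 4 KiB ≈ 274.5 MiB/s, matching the reported throughput. The write_zeroes workload issues zero-fill operations (backed by the NVMe Write Zeroes command where the device supports it), so no data buffer is transferred and nothing is read back for comparison — which is why a one-second single-core run posts far higher IOPS than the verify workloads above.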
00:11:59.728 [2024-11-25 20:30:07.720176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63586 ] 00:11:59.988 [2024-11-25 20:30:07.907904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.988 [2024-11-25 20:30:08.054939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.988 [2024-11-25 20:30:08.055051] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:59.988 [2024-11-25 20:30:08.055077] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:59.988 [2024-11-25 20:30:08.055090] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:00.248 00:12:00.248 real 0m0.731s 00:12:00.248 user 0m0.451s 00:12:00.248 sys 0m0.175s 00:12:00.248 20:30:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.248 20:30:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:00.248 ************************************ 00:12:00.248 END TEST bdev_json_nonenclosed 00:12:00.248 ************************************ 00:12:00.507 20:30:08 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:00.507 20:30:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:00.507 20:30:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.507 20:30:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:00.507 ************************************ 00:12:00.507 START TEST bdev_json_nonarray 00:12:00.507 ************************************ 00:12:00.507 20:30:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:00.507 [2024-11-25 20:30:08.536601] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:12:00.507 [2024-11-25 20:30:08.536730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63617 ] 00:12:00.767 [2024-11-25 20:30:08.718362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.767 [2024-11-25 20:30:08.858979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.767 [2024-11-25 20:30:08.859110] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
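Both JSON negative tests here drive bdevperf with a deliberately malformed --json config and succeed only if the app rejects it and exits non-zero (the "spdk_app_stop'd on non-zero" warnings are the expected outcome). Judging from the two error messages, the fixtures are shaped roughly like this (illustrative shapes inferred from the errors, not the exact file contents):

    nonenclosed.json — the top level is not enclosed in {}, e.g.
        "subsystems": []
    nonarray.json — 'subsystems' is present but is not an array, e.g.
        { "subsystems": {} }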
00:12:00.767 [2024-11-25 20:30:08.859136] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:00.767 [2024-11-25 20:30:08.859150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.025 00:12:01.025 real 0m0.707s 00:12:01.025 user 0m0.434s 00:12:01.025 sys 0m0.168s 00:12:01.025 20:30:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.025 ************************************ 00:12:01.025 END TEST bdev_json_nonarray 00:12:01.025 ************************************ 00:12:01.025 20:30:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:01.286 20:30:09 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:12:01.286 20:30:09 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:12:01.286 20:30:09 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:12:01.286 20:30:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.286 20:30:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.286 20:30:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:01.286 ************************************ 00:12:01.286 START TEST bdev_gpt_uuid 00:12:01.286 ************************************ 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63648 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63648 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63648 ']' 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.286 20:30:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:01.286 [2024-11-25 20:30:09.345714] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
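The bdev_gpt_uuid test starting here brings up spdk_tgt, loads the bdev config, and asserts that each GPT partition bdev's alias equals the unique_partition_guid from its GPT entry (the full bdev_get_bdevs dumps follow below). Condensed, each check is roughly the following, where rpc_cmd is the test helper wrapping scripts/rpc.py:

    u=6f89f330-603b-4116-ac73-2ca8eae53030                # SPDK_TEST_first partition GUID
    bdev=$(rpc_cmd bdev_get_bdevs -b "$u")                # look the bdev up by UUID
    [[ $(jq -r length <<<"$bdev") == 1 ]]                 # exactly one bdev matched
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$u" ]]   # its alias is the GUID
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$u" ]]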
00:12:01.287 [2024-11-25 20:30:09.346054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63648 ] 00:12:01.548 [2024-11-25 20:30:09.527203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.548 [2024-11-25 20:30:09.681244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.929 20:30:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.929 20:30:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:12:02.929 20:30:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:02.929 20:30:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.929 20:30:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.188 Some configs were skipped because the RPC state that can call them passed over. 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:12:03.189 { 00:12:03.189 "name": "Nvme1n1p1", 00:12:03.189 "aliases": [ 00:12:03.189 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:12:03.189 ], 00:12:03.189 "product_name": "GPT Disk", 00:12:03.189 "block_size": 4096, 00:12:03.189 "num_blocks": 655104, 00:12:03.189 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:03.189 "assigned_rate_limits": { 00:12:03.189 "rw_ios_per_sec": 0, 00:12:03.189 "rw_mbytes_per_sec": 0, 00:12:03.189 "r_mbytes_per_sec": 0, 00:12:03.189 "w_mbytes_per_sec": 0 00:12:03.189 }, 00:12:03.189 "claimed": false, 00:12:03.189 "zoned": false, 00:12:03.189 "supported_io_types": { 00:12:03.189 "read": true, 00:12:03.189 "write": true, 00:12:03.189 "unmap": true, 00:12:03.189 "flush": true, 00:12:03.189 "reset": true, 00:12:03.189 "nvme_admin": false, 00:12:03.189 "nvme_io": false, 00:12:03.189 "nvme_io_md": false, 00:12:03.189 "write_zeroes": true, 00:12:03.189 "zcopy": false, 00:12:03.189 "get_zone_info": false, 00:12:03.189 "zone_management": false, 00:12:03.189 "zone_append": false, 00:12:03.189 "compare": true, 00:12:03.189 "compare_and_write": false, 00:12:03.189 "abort": true, 00:12:03.189 "seek_hole": false, 00:12:03.189 "seek_data": false, 00:12:03.189 "copy": true, 00:12:03.189 "nvme_iov_md": false 00:12:03.189 }, 00:12:03.189 "driver_specific": { 
00:12:03.189 "gpt": { 00:12:03.189 "base_bdev": "Nvme1n1", 00:12:03.189 "offset_blocks": 256, 00:12:03.189 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:12:03.189 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:03.189 "partition_name": "SPDK_TEST_first" 00:12:03.189 } 00:12:03.189 } 00:12:03.189 } 00:12:03.189 ]' 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:12:03.189 { 00:12:03.189 "name": "Nvme1n1p2", 00:12:03.189 "aliases": [ 00:12:03.189 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:12:03.189 ], 00:12:03.189 "product_name": "GPT Disk", 00:12:03.189 "block_size": 4096, 00:12:03.189 "num_blocks": 655103, 00:12:03.189 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:03.189 "assigned_rate_limits": { 00:12:03.189 "rw_ios_per_sec": 0, 00:12:03.189 "rw_mbytes_per_sec": 0, 00:12:03.189 "r_mbytes_per_sec": 0, 00:12:03.189 "w_mbytes_per_sec": 0 00:12:03.189 }, 00:12:03.189 "claimed": false, 00:12:03.189 "zoned": false, 00:12:03.189 "supported_io_types": { 00:12:03.189 "read": true, 00:12:03.189 "write": true, 00:12:03.189 "unmap": true, 00:12:03.189 "flush": true, 00:12:03.189 "reset": true, 00:12:03.189 "nvme_admin": false, 00:12:03.189 "nvme_io": false, 00:12:03.189 "nvme_io_md": false, 00:12:03.189 "write_zeroes": true, 00:12:03.189 "zcopy": false, 00:12:03.189 "get_zone_info": false, 00:12:03.189 "zone_management": false, 00:12:03.189 "zone_append": false, 00:12:03.189 "compare": true, 00:12:03.189 "compare_and_write": false, 00:12:03.189 "abort": true, 00:12:03.189 "seek_hole": false, 00:12:03.189 "seek_data": false, 00:12:03.189 "copy": true, 00:12:03.189 "nvme_iov_md": false 00:12:03.189 }, 00:12:03.189 "driver_specific": { 00:12:03.189 "gpt": { 00:12:03.189 "base_bdev": "Nvme1n1", 00:12:03.189 "offset_blocks": 655360, 00:12:03.189 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:12:03.189 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:03.189 "partition_name": "SPDK_TEST_second" 00:12:03.189 } 00:12:03.189 } 00:12:03.189 } 00:12:03.189 ]' 00:12:03.189 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63648 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63648 ']' 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63648 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63648 00:12:03.449 killing process with pid 63648 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63648' 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63648 00:12:03.449 20:30:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63648 00:12:05.984 ************************************ 00:12:05.984 END TEST bdev_gpt_uuid 00:12:05.984 ************************************ 00:12:05.984 00:12:05.984 real 0m4.791s 00:12:05.984 user 0m4.672s 00:12:05.984 sys 0m0.755s 00:12:05.984 20:30:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.984 20:30:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:12:05.984 20:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:06.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:06.808 Waiting for block devices as requested 00:12:07.066 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.066 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:12:07.324 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.324 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.596 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:12.596 20:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:12:12.596 20:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:12:12.856 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:12.856 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:12.856 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:12.856 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:12.856 20:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:12:12.856 00:12:12.856 real 1m7.675s 00:12:12.856 user 1m22.529s 00:12:12.856 sys 0m13.392s 00:12:12.856 ************************************ 00:12:12.856 END TEST blockdev_nvme_gpt 00:12:12.856 ************************************ 00:12:12.856 20:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.856 20:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:12.856 20:30:20 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:12.856 20:30:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.856 20:30:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.856 20:30:20 -- common/autotest_common.sh@10 -- # set +x 00:12:12.856 ************************************ 00:12:12.856 START TEST nvme 00:12:12.856 ************************************ 00:12:12.856 20:30:20 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:12.856 * Looking for test storage... 00:12:12.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:12.856 20:30:20 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:12.856 20:30:20 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:12.856 20:30:20 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:13.114 20:30:21 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:13.114 20:30:21 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.114 20:30:21 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.114 20:30:21 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.115 20:30:21 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.115 20:30:21 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.115 20:30:21 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.115 20:30:21 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.115 20:30:21 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.115 20:30:21 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.115 20:30:21 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.115 20:30:21 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.115 20:30:21 nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:13.115 20:30:21 nvme -- scripts/common.sh@345 -- # : 1 00:12:13.115 20:30:21 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.115 20:30:21 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.115 20:30:21 nvme -- scripts/common.sh@365 -- # decimal 1 00:12:13.115 20:30:21 nvme -- scripts/common.sh@353 -- # local d=1 00:12:13.115 20:30:21 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.115 20:30:21 nvme -- scripts/common.sh@355 -- # echo 1 00:12:13.115 20:30:21 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.115 20:30:21 nvme -- scripts/common.sh@366 -- # decimal 2 00:12:13.115 20:30:21 nvme -- scripts/common.sh@353 -- # local d=2 00:12:13.115 20:30:21 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.115 20:30:21 nvme -- scripts/common.sh@355 -- # echo 2 00:12:13.115 20:30:21 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.115 20:30:21 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.115 20:30:21 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.115 20:30:21 nvme -- scripts/common.sh@368 -- # return 0 00:12:13.115 20:30:21 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.115 20:30:21 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:13.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.115 --rc genhtml_branch_coverage=1 00:12:13.115 --rc genhtml_function_coverage=1 00:12:13.115 --rc genhtml_legend=1 00:12:13.115 --rc geninfo_all_blocks=1 00:12:13.115 --rc geninfo_unexecuted_blocks=1 00:12:13.115 00:12:13.115 ' 00:12:13.115 20:30:21 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:13.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.115 --rc genhtml_branch_coverage=1 00:12:13.115 --rc genhtml_function_coverage=1 00:12:13.115 --rc genhtml_legend=1 00:12:13.115 --rc geninfo_all_blocks=1 00:12:13.115 --rc geninfo_unexecuted_blocks=1 00:12:13.115 00:12:13.115 ' 00:12:13.115 20:30:21 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:13.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.115 --rc genhtml_branch_coverage=1 00:12:13.115 --rc genhtml_function_coverage=1 00:12:13.115 --rc genhtml_legend=1 00:12:13.115 --rc geninfo_all_blocks=1 00:12:13.115 --rc geninfo_unexecuted_blocks=1 00:12:13.115 00:12:13.115 ' 00:12:13.115 20:30:21 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:13.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.115 --rc genhtml_branch_coverage=1 00:12:13.115 --rc genhtml_function_coverage=1 00:12:13.115 --rc genhtml_legend=1 00:12:13.115 --rc geninfo_all_blocks=1 00:12:13.115 --rc geninfo_unexecuted_blocks=1 00:12:13.115 00:12:13.115 ' 00:12:13.115 20:30:21 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:13.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.618 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.618 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.618 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.618 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.618 20:30:22 nvme -- nvme/nvme.sh@79 -- # uname 00:12:14.618 20:30:22 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:12:14.618 20:30:22 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:12:14.618 20:30:22 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:12:14.618 20:30:22 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:12:14.618 20:30:22 nvme -- 
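
The trace above is the harness gating its lcov options on the tool version: lt 1.15 2 splits both version strings on ".", "-" and ":" into arrays and compares them field by field, deciding at the first unequal component (1 < 2, so 1.15 sorts before 2 and the comparison returns 0). A condensed sketch of the same comparison, not the framework's exact helper, and assuming purely numeric components:

ver_lt() {  # succeed when version $1 sorts strictly before version $2
  local IFS=.-:
  local -a v1 v2
  local i
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first smaller field decides
    ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
  done
  return 1  # equal versions are not less-than
}
ver_lt 1.15 2 && echo "lcov predates 2.x"
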
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:12:14.618 Waiting for stub to ready for secondary processes... 00:12:14.618 20:30:22 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:12:14.618 20:30:22 nvme -- common/autotest_common.sh@1075 -- # stubpid=64315 00:12:14.618 20:30:22 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:12:14.618 20:30:22 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:14.618 20:30:22 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64315 ]] 00:12:14.618 20:30:22 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:12:14.618 20:30:22 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:14.618 [2024-11-25 20:30:22.659448] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:12:14.618 [2024-11-25 20:30:22.659702] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:12:15.553 20:30:23 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:15.553 20:30:23 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64315 ]] 00:12:15.553 20:30:23 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:15.553 [2024-11-25 20:30:23.677725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.833 [2024-11-25 20:30:23.793610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.833 [2024-11-25 20:30:23.793762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.833 [2024-11-25 20:30:23.793796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.833 [2024-11-25 20:30:23.813168] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:12:15.833 [2024-11-25 20:30:23.813392] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:15.833 [2024-11-25 20:30:23.830355] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:15.833 [2024-11-25 20:30:23.830577] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:15.833 [2024-11-25 20:30:23.834417] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:15.833 [2024-11-25 20:30:23.834995] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:12:15.833 [2024-11-25 20:30:23.835254] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:12:15.833 [2024-11-25 20:30:23.839073] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:15.833 [2024-11-25 20:30:23.839547] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:12:15.833 [2024-11-25 20:30:23.839707] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:12:15.833 [2024-11-25 20:30:23.843350] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:15.833 [2024-11-25 20:30:23.843602] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:12:15.833 [2024-11-25 20:30:23.843880] nvme_cuse.c: 
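
The loop above is the stub handshake: start_stub backgrounds test/app/stub (4096 MiB of hugepages, shm ID 0, core mask 0xE), then the harness polls once per second until /var/run/spdk_stub0 appears, re-checking /proc/<stubpid> each pass so a crashed stub fails the test quickly instead of hanging it. The same wait as a small helper; the timeout is an addition for this sketch, the traced loop has none:

wait_for_stub() {
  local pid=$1 ready=/var/run/spdk_stub0 tries=60
  while [[ ! -e $ready ]]; do
    [[ -e /proc/$pid ]] || return 1   # stub exited before signalling readiness
    ((tries-- > 0)) || return 1       # sketch-only timeout, ~60s
    sleep 1s
  done
}
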
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:12:15.833 [2024-11-25 20:30:23.844211] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:12:15.833 [2024-11-25 20:30:23.844348] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:12:16.770 20:30:24 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:16.770 20:30:24 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:12:16.770 done. 00:12:16.770 20:30:24 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:16.770 20:30:24 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:12:16.770 20:30:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.770 20:30:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.770 ************************************ 00:12:16.770 START TEST nvme_reset 00:12:16.770 ************************************ 00:12:16.770 20:30:24 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:17.029 Initializing NVMe Controllers 00:12:17.029 Skipping QEMU NVMe SSD at 0000:00:10.0 00:12:17.029 Skipping QEMU NVMe SSD at 0000:00:11.0 00:12:17.029 Skipping QEMU NVMe SSD at 0000:00:13.0 00:12:17.029 Skipping QEMU NVMe SSD at 0000:00:12.0 00:12:17.029 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:17.029 ************************************ 00:12:17.029 END TEST nvme_reset 00:12:17.029 ************************************ 00:12:17.029 00:12:17.029 real 0m0.309s 00:12:17.029 user 0m0.112s 00:12:17.029 sys 0m0.151s 00:12:17.029 20:30:24 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.029 20:30:24 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:12:17.029 20:30:25 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:17.029 20:30:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:17.029 20:30:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.029 20:30:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:17.029 ************************************ 00:12:17.029 START TEST nvme_identify 00:12:17.029 ************************************ 00:12:17.029 20:30:25 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:12:17.029 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:12:17.029 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:17.029 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:17.029 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:17.029 20:30:25 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:17.029 20:30:25 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:12:17.029 20:30:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:17.029 20:30:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:17.029 20:30:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:17.029 20:30:25 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:17.029 20:30:25 nvme.nvme_identify -- 
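
nvme_identify's device discovery above is worth pulling out: scripts/gen_nvme.sh emits a bdev-attach JSON config for every local NVMe controller, and jq reduces it to the PCI addresses (four on this VM). The identify binary is then run with -i 0, the same shared-memory ID the stub was started with, so it attaches as a secondary process instead of re-initializing the devices. Runnable from an SPDK checkout while the stub is up:

mapfile -t bdfs < <(scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
((${#bdfs[@]} > 0)) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"
build/bin/spdk_nvme_identify -i 0   # dump identify data for all attached controllers
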
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:17.029 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:17.291 [2024-11-25 20:30:25.388469] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64348 terminated unexpected 00:12:17.291 ===================================================== 00:12:17.291 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:17.291 ===================================================== 00:12:17.291 Controller Capabilities/Features 00:12:17.291 ================================ 00:12:17.291 Vendor ID: 1b36 00:12:17.291 Subsystem Vendor ID: 1af4 00:12:17.291 Serial Number: 12340 00:12:17.291 Model Number: QEMU NVMe Ctrl 00:12:17.291 Firmware Version: 8.0.0 00:12:17.291 Recommended Arb Burst: 6 00:12:17.291 IEEE OUI Identifier: 00 54 52 00:12:17.291 Multi-path I/O 00:12:17.291 May have multiple subsystem ports: No 00:12:17.291 May have multiple controllers: No 00:12:17.291 Associated with SR-IOV VF: No 00:12:17.291 Max Data Transfer Size: 524288 00:12:17.291 Max Number of Namespaces: 256 00:12:17.291 Max Number of I/O Queues: 64 00:12:17.291 NVMe Specification Version (VS): 1.4 00:12:17.291 NVMe Specification Version (Identify): 1.4 00:12:17.291 Maximum Queue Entries: 2048 00:12:17.291 Contiguous Queues Required: Yes 00:12:17.291 Arbitration Mechanisms Supported 00:12:17.291 Weighted Round Robin: Not Supported 00:12:17.291 Vendor Specific: Not Supported 00:12:17.291 Reset Timeout: 7500 ms 00:12:17.291 Doorbell Stride: 4 bytes 00:12:17.291 NVM Subsystem Reset: Not Supported 00:12:17.291 Command Sets Supported 00:12:17.291 NVM Command Set: Supported 00:12:17.291 Boot Partition: Not Supported 00:12:17.291 Memory Page Size Minimum: 4096 bytes 00:12:17.291 Memory Page Size Maximum: 65536 bytes 00:12:17.291 Persistent Memory Region: Not Supported 00:12:17.291 Optional Asynchronous Events Supported 00:12:17.291 Namespace Attribute Notices: Supported 00:12:17.291 Firmware Activation Notices: Not Supported 00:12:17.291 ANA Change Notices: Not Supported 00:12:17.291 PLE Aggregate Log Change Notices: Not Supported 00:12:17.291 LBA Status Info Alert Notices: Not Supported 00:12:17.291 EGE Aggregate Log Change Notices: Not Supported 00:12:17.291 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.291 Zone Descriptor Change Notices: Not Supported 00:12:17.291 Discovery Log Change Notices: Not Supported 00:12:17.291 Controller Attributes 00:12:17.291 128-bit Host Identifier: Not Supported 00:12:17.291 Non-Operational Permissive Mode: Not Supported 00:12:17.291 NVM Sets: Not Supported 00:12:17.291 Read Recovery Levels: Not Supported 00:12:17.291 Endurance Groups: Not Supported 00:12:17.292 Predictable Latency Mode: Not Supported 00:12:17.292 Traffic Based Keep ALive: Not Supported 00:12:17.292 Namespace Granularity: Not Supported 00:12:17.292 SQ Associations: Not Supported 00:12:17.292 UUID List: Not Supported 00:12:17.292 Multi-Domain Subsystem: Not Supported 00:12:17.292 Fixed Capacity Management: Not Supported 00:12:17.292 Variable Capacity Management: Not Supported 00:12:17.292 Delete Endurance Group: Not Supported 00:12:17.292 Delete NVM Set: Not Supported 00:12:17.292 Extended LBA Formats Supported: Supported 00:12:17.292 Flexible Data Placement Supported: Not Supported 00:12:17.292 00:12:17.292 Controller Memory Buffer Support 00:12:17.292 ================================ 00:12:17.292 Supported: No 
00:12:17.292 00:12:17.292 Persistent Memory Region Support 00:12:17.292 ================================ 00:12:17.292 Supported: No 00:12:17.292 00:12:17.292 Admin Command Set Attributes 00:12:17.292 ============================ 00:12:17.292 Security Send/Receive: Not Supported 00:12:17.292 Format NVM: Supported 00:12:17.292 Firmware Activate/Download: Not Supported 00:12:17.292 Namespace Management: Supported 00:12:17.292 Device Self-Test: Not Supported 00:12:17.292 Directives: Supported 00:12:17.292 NVMe-MI: Not Supported 00:12:17.292 Virtualization Management: Not Supported 00:12:17.292 Doorbell Buffer Config: Supported 00:12:17.292 Get LBA Status Capability: Not Supported 00:12:17.292 Command & Feature Lockdown Capability: Not Supported 00:12:17.292 Abort Command Limit: 4 00:12:17.292 Async Event Request Limit: 4 00:12:17.292 Number of Firmware Slots: N/A 00:12:17.292 Firmware Slot 1 Read-Only: N/A 00:12:17.292 Firmware Activation Without Reset: N/A 00:12:17.292 Multiple Update Detection Support: N/A 00:12:17.292 Firmware Update Granularity: No Information Provided 00:12:17.292 Per-Namespace SMART Log: Yes 00:12:17.292 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.292 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:17.292 Command Effects Log Page: Supported 00:12:17.292 Get Log Page Extended Data: Supported 00:12:17.292 Telemetry Log Pages: Not Supported 00:12:17.292 Persistent Event Log Pages: Not Supported 00:12:17.292 Supported Log Pages Log Page: May Support 00:12:17.292 Commands Supported & Effects Log Page: Not Supported 00:12:17.292 Feature Identifiers & Effects Log Page:May Support 00:12:17.292 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.292 Data Area 4 for Telemetry Log: Not Supported 00:12:17.292 Error Log Page Entries Supported: 1 00:12:17.292 Keep Alive: Not Supported 00:12:17.292 00:12:17.292 NVM Command Set Attributes 00:12:17.292 ========================== 00:12:17.292 Submission Queue Entry Size 00:12:17.292 Max: 64 00:12:17.292 Min: 64 00:12:17.292 Completion Queue Entry Size 00:12:17.292 Max: 16 00:12:17.292 Min: 16 00:12:17.292 Number of Namespaces: 256 00:12:17.292 Compare Command: Supported 00:12:17.292 Write Uncorrectable Command: Not Supported 00:12:17.292 Dataset Management Command: Supported 00:12:17.292 Write Zeroes Command: Supported 00:12:17.292 Set Features Save Field: Supported 00:12:17.292 Reservations: Not Supported 00:12:17.292 Timestamp: Supported 00:12:17.292 Copy: Supported 00:12:17.292 Volatile Write Cache: Present 00:12:17.292 Atomic Write Unit (Normal): 1 00:12:17.292 Atomic Write Unit (PFail): 1 00:12:17.292 Atomic Compare & Write Unit: 1 00:12:17.292 Fused Compare & Write: Not Supported 00:12:17.292 Scatter-Gather List 00:12:17.292 SGL Command Set: Supported 00:12:17.292 SGL Keyed: Not Supported 00:12:17.292 SGL Bit Bucket Descriptor: Not Supported 00:12:17.292 SGL Metadata Pointer: Not Supported 00:12:17.292 Oversized SGL: Not Supported 00:12:17.292 SGL Metadata Address: Not Supported 00:12:17.292 SGL Offset: Not Supported 00:12:17.292 Transport SGL Data Block: Not Supported 00:12:17.292 Replay Protected Memory Block: Not Supported 00:12:17.292 00:12:17.292 Firmware Slot Information 00:12:17.292 ========================= 00:12:17.292 Active slot: 1 00:12:17.292 Slot 1 Firmware Revision: 1.0 00:12:17.292 00:12:17.292 00:12:17.292 Commands Supported and Effects 00:12:17.292 ============================== 00:12:17.292 Admin Commands 00:12:17.292 -------------- 00:12:17.292 Delete I/O Submission Queue (00h): Supported 
00:12:17.292 Create I/O Submission Queue (01h): Supported 00:12:17.292 Get Log Page (02h): Supported 00:12:17.292 Delete I/O Completion Queue (04h): Supported 00:12:17.292 Create I/O Completion Queue (05h): Supported 00:12:17.292 Identify (06h): Supported 00:12:17.292 Abort (08h): Supported 00:12:17.292 Set Features (09h): Supported 00:12:17.292 Get Features (0Ah): Supported 00:12:17.292 Asynchronous Event Request (0Ch): Supported 00:12:17.292 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.292 Directive Send (19h): Supported 00:12:17.292 Directive Receive (1Ah): Supported 00:12:17.292 Virtualization Management (1Ch): Supported 00:12:17.292 Doorbell Buffer Config (7Ch): Supported 00:12:17.292 Format NVM (80h): Supported LBA-Change 00:12:17.292 I/O Commands 00:12:17.292 ------------ 00:12:17.292 Flush (00h): Supported LBA-Change 00:12:17.292 Write (01h): Supported LBA-Change 00:12:17.292 Read (02h): Supported 00:12:17.292 Compare (05h): Supported 00:12:17.292 Write Zeroes (08h): Supported LBA-Change 00:12:17.292 Dataset Management (09h): Supported LBA-Change 00:12:17.292 Unknown (0Ch): Supported 00:12:17.292 Unknown (12h): Supported 00:12:17.292 Copy (19h): Supported LBA-Change 00:12:17.292 Unknown (1Dh): Supported LBA-Change 00:12:17.292 00:12:17.292 Error Log 00:12:17.292 ========= 00:12:17.292 00:12:17.292 Arbitration 00:12:17.292 =========== 00:12:17.292 Arbitration Burst: no limit 00:12:17.292 00:12:17.292 Power Management 00:12:17.292 ================ 00:12:17.292 Number of Power States: 1 00:12:17.292 Current Power State: Power State #0 00:12:17.292 Power State #0: 00:12:17.292 Max Power: 25.00 W 00:12:17.292 Non-Operational State: Operational 00:12:17.292 Entry Latency: 16 microseconds 00:12:17.292 Exit Latency: 4 microseconds 00:12:17.292 Relative Read Throughput: 0 00:12:17.292 Relative Read Latency: 0 00:12:17.292 Relative Write Throughput: 0 00:12:17.292 Relative Write Latency: 0 00:12:17.292 [2024-11-25 20:30:25.389835] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64348 terminated unexpected 00:12:17.292 Idle Power: Not Reported 00:12:17.292 Active Power: Not Reported 00:12:17.292 Non-Operational Permissive Mode: Not Supported 00:12:17.292 00:12:17.292 Health Information 00:12:17.292 ================== 00:12:17.292 Critical Warnings: 00:12:17.292 Available Spare Space: OK 00:12:17.292 Temperature: OK 00:12:17.292 Device Reliability: OK 00:12:17.292 Read Only: No 00:12:17.292 Volatile Memory Backup: OK 00:12:17.292 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.292 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.292 Available Spare: 0% 00:12:17.292 Available Spare Threshold: 0% 00:12:17.292 Life Percentage Used: 0% 00:12:17.292 Data Units Read: 743 00:12:17.292 Data Units Written: 671 00:12:17.292 Host Read Commands: 35101 00:12:17.292 Host Write Commands: 34887 00:12:17.292 Controller Busy Time: 0 minutes 00:12:17.292 Power Cycles: 0 00:12:17.292 Power On Hours: 0 hours 00:12:17.292 Unsafe Shutdowns: 0 00:12:17.292 Unrecoverable Media Errors: 0 00:12:17.292 Lifetime Error Log Entries: 0 00:12:17.292 Warning Temperature Time: 0 minutes 00:12:17.292 Critical Temperature Time: 0 minutes 00:12:17.292 00:12:17.292 Number of Queues 00:12:17.292 ================ 00:12:17.292 Number of I/O Submission Queues: 64 00:12:17.292 Number of I/O Completion Queues: 64 00:12:17.292 00:12:17.292 ZNS Specific Controller Data 00:12:17.292 ============================ 00:12:17.292 Zone Append Size Limit: 0 00:12:17.292
00:12:17.292 00:12:17.292 Active Namespaces 00:12:17.292 ================= 00:12:17.292 Namespace ID:1 00:12:17.292 Error Recovery Timeout: Unlimited 00:12:17.292 Command Set Identifier: NVM (00h) 00:12:17.292 Deallocate: Supported 00:12:17.292 Deallocated/Unwritten Error: Supported 00:12:17.292 Deallocated Read Value: All 0x00 00:12:17.292 Deallocate in Write Zeroes: Not Supported 00:12:17.292 Deallocated Guard Field: 0xFFFF 00:12:17.292 Flush: Supported 00:12:17.292 Reservation: Not Supported 00:12:17.292 Metadata Transferred as: Separate Metadata Buffer 00:12:17.292 Namespace Sharing Capabilities: Private 00:12:17.292 Size (in LBAs): 1548666 (5GiB) 00:12:17.292 Capacity (in LBAs): 1548666 (5GiB) 00:12:17.292 Utilization (in LBAs): 1548666 (5GiB) 00:12:17.292 Thin Provisioning: Not Supported 00:12:17.292 Per-NS Atomic Units: No 00:12:17.293 Maximum Single Source Range Length: 128 00:12:17.293 Maximum Copy Length: 128 00:12:17.293 Maximum Source Range Count: 128 00:12:17.293 NGUID/EUI64 Never Reused: No 00:12:17.293 Namespace Write Protected: No 00:12:17.293 Number of LBA Formats: 8 00:12:17.293 Current LBA Format: LBA Format #07 00:12:17.293 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.293 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.293 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.293 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.293 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.293 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.293 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.293 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.293 00:12:17.293 NVM Specific Namespace Data 00:12:17.293 =========================== 00:12:17.293 Logical Block Storage Tag Mask: 0 00:12:17.293 Protection Information Capabilities: 00:12:17.293 16b Guard Protection Information Storage Tag Support: No 00:12:17.293 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.293 Storage Tag Check Read Support: No 00:12:17.293 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.293 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.293 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.293 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.293 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.293 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.293 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.293 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.293 ===================================================== 00:12:17.293 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:17.293 ===================================================== 00:12:17.293 Controller Capabilities/Features 00:12:17.293 ================================ 00:12:17.293 Vendor ID: 1b36 00:12:17.293 Subsystem Vendor ID: 1af4 00:12:17.293 Serial Number: 12341 00:12:17.293 Model Number: QEMU NVMe Ctrl 00:12:17.293 Firmware Version: 8.0.0 00:12:17.293 Recommended Arb Burst: 6 00:12:17.293 IEEE OUI Identifier: 00 54 52 00:12:17.293 Multi-path I/O 00:12:17.293 May have multiple subsystem ports: No 00:12:17.293 May have multiple controllers: No 
00:12:17.293 Associated with SR-IOV VF: No 00:12:17.293 Max Data Transfer Size: 524288 00:12:17.293 Max Number of Namespaces: 256 00:12:17.293 Max Number of I/O Queues: 64 00:12:17.293 NVMe Specification Version (VS): 1.4 00:12:17.293 NVMe Specification Version (Identify): 1.4 00:12:17.293 Maximum Queue Entries: 2048 00:12:17.293 Contiguous Queues Required: Yes 00:12:17.293 Arbitration Mechanisms Supported 00:12:17.293 Weighted Round Robin: Not Supported 00:12:17.293 Vendor Specific: Not Supported 00:12:17.293 Reset Timeout: 7500 ms 00:12:17.293 Doorbell Stride: 4 bytes 00:12:17.293 NVM Subsystem Reset: Not Supported 00:12:17.293 Command Sets Supported 00:12:17.293 NVM Command Set: Supported 00:12:17.293 Boot Partition: Not Supported 00:12:17.293 Memory Page Size Minimum: 4096 bytes 00:12:17.293 Memory Page Size Maximum: 65536 bytes 00:12:17.293 Persistent Memory Region: Not Supported 00:12:17.293 Optional Asynchronous Events Supported 00:12:17.293 Namespace Attribute Notices: Supported 00:12:17.293 Firmware Activation Notices: Not Supported 00:12:17.293 ANA Change Notices: Not Supported 00:12:17.293 PLE Aggregate Log Change Notices: Not Supported 00:12:17.293 LBA Status Info Alert Notices: Not Supported 00:12:17.293 EGE Aggregate Log Change Notices: Not Supported 00:12:17.293 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.293 Zone Descriptor Change Notices: Not Supported 00:12:17.293 Discovery Log Change Notices: Not Supported 00:12:17.293 Controller Attributes 00:12:17.293 128-bit Host Identifier: Not Supported 00:12:17.293 Non-Operational Permissive Mode: Not Supported 00:12:17.293 NVM Sets: Not Supported 00:12:17.293 Read Recovery Levels: Not Supported 00:12:17.293 Endurance Groups: Not Supported 00:12:17.293 Predictable Latency Mode: Not Supported 00:12:17.293 Traffic Based Keep ALive: Not Supported 00:12:17.293 Namespace Granularity: Not Supported 00:12:17.293 SQ Associations: Not Supported 00:12:17.293 UUID List: Not Supported 00:12:17.293 Multi-Domain Subsystem: Not Supported 00:12:17.293 Fixed Capacity Management: Not Supported 00:12:17.293 Variable Capacity Management: Not Supported 00:12:17.293 Delete Endurance Group: Not Supported 00:12:17.293 Delete NVM Set: Not Supported 00:12:17.293 Extended LBA Formats Supported: Supported 00:12:17.293 Flexible Data Placement Supported: Not Supported 00:12:17.293 00:12:17.293 Controller Memory Buffer Support 00:12:17.293 ================================ 00:12:17.293 Supported: No 00:12:17.293 00:12:17.293 Persistent Memory Region Support 00:12:17.293 ================================ 00:12:17.293 Supported: No 00:12:17.293 00:12:17.293 Admin Command Set Attributes 00:12:17.293 ============================ 00:12:17.293 Security Send/Receive: Not Supported 00:12:17.293 Format NVM: Supported 00:12:17.293 Firmware Activate/Download: Not Supported 00:12:17.293 Namespace Management: Supported 00:12:17.293 Device Self-Test: Not Supported 00:12:17.293 Directives: Supported 00:12:17.293 NVMe-MI: Not Supported 00:12:17.293 Virtualization Management: Not Supported 00:12:17.293 Doorbell Buffer Config: Supported 00:12:17.293 Get LBA Status Capability: Not Supported 00:12:17.293 Command & Feature Lockdown Capability: Not Supported 00:12:17.293 Abort Command Limit: 4 00:12:17.293 Async Event Request Limit: 4 00:12:17.293 Number of Firmware Slots: N/A 00:12:17.293 Firmware Slot 1 Read-Only: N/A 00:12:17.293 Firmware Activation Without Reset: N/A 00:12:17.293 Multiple Update Detection Support: N/A 00:12:17.293 Firmware Update Granularity: No 
Information Provided 00:12:17.293 Per-Namespace SMART Log: Yes 00:12:17.293 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.293 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:17.293 Command Effects Log Page: Supported 00:12:17.293 Get Log Page Extended Data: Supported 00:12:17.293 Telemetry Log Pages: Not Supported 00:12:17.293 Persistent Event Log Pages: Not Supported 00:12:17.293 Supported Log Pages Log Page: May Support 00:12:17.293 Commands Supported & Effects Log Page: Not Supported 00:12:17.293 Feature Identifiers & Effects Log Page:May Support 00:12:17.293 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.293 Data Area 4 for Telemetry Log: Not Supported 00:12:17.293 Error Log Page Entries Supported: 1 00:12:17.293 Keep Alive: Not Supported 00:12:17.293 00:12:17.293 NVM Command Set Attributes 00:12:17.293 ========================== 00:12:17.293 Submission Queue Entry Size 00:12:17.293 Max: 64 00:12:17.293 Min: 64 00:12:17.293 Completion Queue Entry Size 00:12:17.293 Max: 16 00:12:17.293 Min: 16 00:12:17.293 Number of Namespaces: 256 00:12:17.293 Compare Command: Supported 00:12:17.294 Write Uncorrectable Command: Not Supported 00:12:17.294 Dataset Management Command: Supported 00:12:17.294 Write Zeroes Command: Supported 00:12:17.294 Set Features Save Field: Supported 00:12:17.294 Reservations: Not Supported 00:12:17.294 Timestamp: Supported 00:12:17.294 Copy: Supported 00:12:17.294 Volatile Write Cache: Present 00:12:17.294 Atomic Write Unit (Normal): 1 00:12:17.294 Atomic Write Unit (PFail): 1 00:12:17.294 Atomic Compare & Write Unit: 1 00:12:17.294 Fused Compare & Write: Not Supported 00:12:17.294 Scatter-Gather List 00:12:17.294 SGL Command Set: Supported 00:12:17.294 SGL Keyed: Not Supported 00:12:17.294 SGL Bit Bucket Descriptor: Not Supported 00:12:17.294 SGL Metadata Pointer: Not Supported 00:12:17.294 Oversized SGL: Not Supported 00:12:17.294 SGL Metadata Address: Not Supported 00:12:17.294 SGL Offset: Not Supported 00:12:17.294 Transport SGL Data Block: Not Supported 00:12:17.294 Replay Protected Memory Block: Not Supported 00:12:17.294 00:12:17.294 Firmware Slot Information 00:12:17.294 ========================= 00:12:17.294 Active slot: 1 00:12:17.294 Slot 1 Firmware Revision: 1.0 00:12:17.294 00:12:17.294 00:12:17.294 Commands Supported and Effects 00:12:17.294 ============================== 00:12:17.294 Admin Commands 00:12:17.294 -------------- 00:12:17.294 Delete I/O Submission Queue (00h): Supported 00:12:17.294 Create I/O Submission Queue (01h): Supported 00:12:17.294 Get Log Page (02h): Supported 00:12:17.294 Delete I/O Completion Queue (04h): Supported 00:12:17.294 Create I/O Completion Queue (05h): Supported 00:12:17.294 Identify (06h): Supported 00:12:17.294 Abort (08h): Supported 00:12:17.294 Set Features (09h): Supported 00:12:17.294 Get Features (0Ah): Supported 00:12:17.294 Asynchronous Event Request (0Ch): Supported 00:12:17.294 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.294 Directive Send (19h): Supported 00:12:17.294 Directive Receive (1Ah): Supported 00:12:17.294 Virtualization Management (1Ch): Supported 00:12:17.294 Doorbell Buffer Config (7Ch): Supported 00:12:17.294 Format NVM (80h): Supported LBA-Change 00:12:17.294 I/O Commands 00:12:17.294 ------------ 00:12:17.294 Flush (00h): Supported LBA-Change 00:12:17.294 Write (01h): Supported LBA-Change 00:12:17.294 Read (02h): Supported 00:12:17.294 Compare (05h): Supported 00:12:17.294 Write Zeroes (08h): Supported LBA-Change 00:12:17.294 Dataset Management 
(09h): Supported LBA-Change 00:12:17.294 Unknown (0Ch): Supported 00:12:17.294 Unknown (12h): Supported 00:12:17.294 Copy (19h): Supported LBA-Change 00:12:17.294 Unknown (1Dh): Supported LBA-Change 00:12:17.294 00:12:17.294 Error Log 00:12:17.294 ========= 00:12:17.294 00:12:17.294 Arbitration 00:12:17.294 =========== 00:12:17.294 Arbitration Burst: no limit 00:12:17.294 00:12:17.294 Power Management 00:12:17.294 ================ 00:12:17.294 Number of Power States: 1 00:12:17.294 Current Power State: Power State #0 00:12:17.294 Power State #0: 00:12:17.294 Max Power: 25.00 W 00:12:17.294 Non-Operational State: Operational 00:12:17.294 Entry Latency: 16 microseconds 00:12:17.294 Exit Latency: 4 microseconds 00:12:17.294 Relative Read Throughput: 0 00:12:17.294 Relative Read Latency: 0 00:12:17.294 Relative Write Throughput: 0 00:12:17.294 Relative Write Latency: 0 00:12:17.294 Idle Power: Not Reported 00:12:17.294 Active Power: Not Reported 00:12:17.294 Non-Operational Permissive Mode: Not Supported 00:12:17.294 00:12:17.294 Health Information 00:12:17.294 ================== 00:12:17.294 Critical Warnings: 00:12:17.294 Available Spare Space: OK 00:12:17.294 [2024-11-25 20:30:25.390691] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64348 terminated unexpected 00:12:17.294 Temperature: OK 00:12:17.294 Device Reliability: OK 00:12:17.294 Read Only: No 00:12:17.294 Volatile Memory Backup: OK 00:12:17.294 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.294 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.294 Available Spare: 0% 00:12:17.294 Available Spare Threshold: 0% 00:12:17.294 Life Percentage Used: 0% 00:12:17.294 Data Units Read: 1167 00:12:17.294 Data Units Written: 1034 00:12:17.294 Host Read Commands: 54183 00:12:17.294 Host Write Commands: 52966 00:12:17.294 Controller Busy Time: 0 minutes 00:12:17.294 Power Cycles: 0 00:12:17.294 Power On Hours: 0 hours 00:12:17.294 Unsafe Shutdowns: 0 00:12:17.294 Unrecoverable Media Errors: 0 00:12:17.294 Lifetime Error Log Entries: 0 00:12:17.294 Warning Temperature Time: 0 minutes 00:12:17.294 Critical Temperature Time: 0 minutes 00:12:17.294 00:12:17.294 Number of Queues 00:12:17.294 ================ 00:12:17.294 Number of I/O Submission Queues: 64 00:12:17.294 Number of I/O Completion Queues: 64 00:12:17.294 00:12:17.294 ZNS Specific Controller Data 00:12:17.294 ============================ 00:12:17.294 Zone Append Size Limit: 0 00:12:17.294 00:12:17.294 00:12:17.294 Active Namespaces 00:12:17.294 ================= 00:12:17.294 Namespace ID:1 00:12:17.295 Error Recovery Timeout: Unlimited 00:12:17.295 Command Set Identifier: NVM (00h) 00:12:17.295 Deallocate: Supported 00:12:17.295 Deallocated/Unwritten Error: Supported 00:12:17.295 Deallocated Read Value: All 0x00 00:12:17.295 Deallocate in Write Zeroes: Not Supported 00:12:17.295 Deallocated Guard Field: 0xFFFF 00:12:17.295 Flush: Supported 00:12:17.295 Reservation: Not Supported 00:12:17.295 Namespace Sharing Capabilities: Private 00:12:17.295 Size (in LBAs): 1310720 (5GiB) 00:12:17.295 Capacity (in LBAs): 1310720 (5GiB) 00:12:17.295 Utilization (in LBAs): 1310720 (5GiB) 00:12:17.295 Thin Provisioning: Not Supported 00:12:17.295 Per-NS Atomic Units: No 00:12:17.295 Maximum Single Source Range Length: 128 00:12:17.295 Maximum Copy Length: 128 00:12:17.295 Maximum Source Range Count: 128 00:12:17.295 NGUID/EUI64 Never Reused: No 00:12:17.295 Namespace Write Protected: No 00:12:17.295 Number of LBA Formats: 8 00:12:17.295 Current LBA
Format: LBA Format #04 00:12:17.295 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.295 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.295 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.295 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.295 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.295 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.295 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.295 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.295 00:12:17.295 NVM Specific Namespace Data 00:12:17.295 =========================== 00:12:17.295 Logical Block Storage Tag Mask: 0 00:12:17.295 Protection Information Capabilities: 00:12:17.295 16b Guard Protection Information Storage Tag Support: No 00:12:17.295 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.295 Storage Tag Check Read Support: No 00:12:17.295 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.295 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.295 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.295 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.295 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.295 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.295 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.295 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.295 ===================================================== 00:12:17.295 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:17.295 ===================================================== 00:12:17.295 Controller Capabilities/Features 00:12:17.295 ================================ 00:12:17.295 Vendor ID: 1b36 00:12:17.295 Subsystem Vendor ID: 1af4 00:12:17.295 Serial Number: 12343 00:12:17.295 Model Number: QEMU NVMe Ctrl 00:12:17.295 Firmware Version: 8.0.0 00:12:17.295 Recommended Arb Burst: 6 00:12:17.295 IEEE OUI Identifier: 00 54 52 00:12:17.295 Multi-path I/O 00:12:17.295 May have multiple subsystem ports: No 00:12:17.295 May have multiple controllers: Yes 00:12:17.295 Associated with SR-IOV VF: No 00:12:17.295 Max Data Transfer Size: 524288 00:12:17.295 Max Number of Namespaces: 256 00:12:17.295 Max Number of I/O Queues: 64 00:12:17.295 NVMe Specification Version (VS): 1.4 00:12:17.295 NVMe Specification Version (Identify): 1.4 00:12:17.295 Maximum Queue Entries: 2048 00:12:17.295 Contiguous Queues Required: Yes 00:12:17.295 Arbitration Mechanisms Supported 00:12:17.295 Weighted Round Robin: Not Supported 00:12:17.295 Vendor Specific: Not Supported 00:12:17.295 Reset Timeout: 7500 ms 00:12:17.295 Doorbell Stride: 4 bytes 00:12:17.295 NVM Subsystem Reset: Not Supported 00:12:17.295 Command Sets Supported 00:12:17.295 NVM Command Set: Supported 00:12:17.295 Boot Partition: Not Supported 00:12:17.295 Memory Page Size Minimum: 4096 bytes 00:12:17.295 Memory Page Size Maximum: 65536 bytes 00:12:17.295 Persistent Memory Region: Not Supported 00:12:17.295 Optional Asynchronous Events Supported 00:12:17.295 Namespace Attribute Notices: Supported 00:12:17.295 Firmware Activation Notices: Not Supported 00:12:17.295 ANA Change Notices: Not Supported 00:12:17.295 PLE Aggregate 
Log Change Notices: Not Supported 00:12:17.295 LBA Status Info Alert Notices: Not Supported 00:12:17.295 EGE Aggregate Log Change Notices: Not Supported 00:12:17.295 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.295 Zone Descriptor Change Notices: Not Supported 00:12:17.295 Discovery Log Change Notices: Not Supported 00:12:17.295 Controller Attributes 00:12:17.295 128-bit Host Identifier: Not Supported 00:12:17.295 Non-Operational Permissive Mode: Not Supported 00:12:17.295 NVM Sets: Not Supported 00:12:17.295 Read Recovery Levels: Not Supported 00:12:17.295 Endurance Groups: Supported 00:12:17.295 Predictable Latency Mode: Not Supported 00:12:17.295 Traffic Based Keep ALive: Not Supported 00:12:17.295 Namespace Granularity: Not Supported 00:12:17.295 SQ Associations: Not Supported 00:12:17.295 UUID List: Not Supported 00:12:17.295 Multi-Domain Subsystem: Not Supported 00:12:17.295 Fixed Capacity Management: Not Supported 00:12:17.295 Variable Capacity Management: Not Supported 00:12:17.295 Delete Endurance Group: Not Supported 00:12:17.295 Delete NVM Set: Not Supported 00:12:17.295 Extended LBA Formats Supported: Supported 00:12:17.295 Flexible Data Placement Supported: Supported 00:12:17.295 00:12:17.295 Controller Memory Buffer Support 00:12:17.295 ================================ 00:12:17.295 Supported: No 00:12:17.295 00:12:17.295 Persistent Memory Region Support 00:12:17.296 ================================ 00:12:17.296 Supported: No 00:12:17.296 00:12:17.296 Admin Command Set Attributes 00:12:17.296 ============================ 00:12:17.296 Security Send/Receive: Not Supported 00:12:17.296 Format NVM: Supported 00:12:17.296 Firmware Activate/Download: Not Supported 00:12:17.296 Namespace Management: Supported 00:12:17.296 Device Self-Test: Not Supported 00:12:17.296 Directives: Supported 00:12:17.296 NVMe-MI: Not Supported 00:12:17.296 Virtualization Management: Not Supported 00:12:17.296 Doorbell Buffer Config: Supported 00:12:17.296 Get LBA Status Capability: Not Supported 00:12:17.296 Command & Feature Lockdown Capability: Not Supported 00:12:17.296 Abort Command Limit: 4 00:12:17.296 Async Event Request Limit: 4 00:12:17.296 Number of Firmware Slots: N/A 00:12:17.296 Firmware Slot 1 Read-Only: N/A 00:12:17.296 Firmware Activation Without Reset: N/A 00:12:17.296 Multiple Update Detection Support: N/A 00:12:17.296 Firmware Update Granularity: No Information Provided 00:12:17.296 Per-Namespace SMART Log: Yes 00:12:17.296 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.296 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:17.296 Command Effects Log Page: Supported 00:12:17.296 Get Log Page Extended Data: Supported 00:12:17.296 Telemetry Log Pages: Not Supported 00:12:17.296 Persistent Event Log Pages: Not Supported 00:12:17.296 Supported Log Pages Log Page: May Support 00:12:17.296 Commands Supported & Effects Log Page: Not Supported 00:12:17.296 Feature Identifiers & Effects Log Page:May Support 00:12:17.296 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.296 Data Area 4 for Telemetry Log: Not Supported 00:12:17.296 Error Log Page Entries Supported: 1 00:12:17.296 Keep Alive: Not Supported 00:12:17.296 00:12:17.296 NVM Command Set Attributes 00:12:17.296 ========================== 00:12:17.296 Submission Queue Entry Size 00:12:17.296 Max: 64 00:12:17.296 Min: 64 00:12:17.296 Completion Queue Entry Size 00:12:17.296 Max: 16 00:12:17.296 Min: 16 00:12:17.296 Number of Namespaces: 256 00:12:17.296 Compare Command: Supported 00:12:17.296 Write 
Uncorrectable Command: Not Supported 00:12:17.296 Dataset Management Command: Supported 00:12:17.296 Write Zeroes Command: Supported 00:12:17.296 Set Features Save Field: Supported 00:12:17.296 Reservations: Not Supported 00:12:17.296 Timestamp: Supported 00:12:17.296 Copy: Supported 00:12:17.296 Volatile Write Cache: Present 00:12:17.296 Atomic Write Unit (Normal): 1 00:12:17.296 Atomic Write Unit (PFail): 1 00:12:17.296 Atomic Compare & Write Unit: 1 00:12:17.296 Fused Compare & Write: Not Supported 00:12:17.296 Scatter-Gather List 00:12:17.296 SGL Command Set: Supported 00:12:17.296 SGL Keyed: Not Supported 00:12:17.296 SGL Bit Bucket Descriptor: Not Supported 00:12:17.296 SGL Metadata Pointer: Not Supported 00:12:17.296 Oversized SGL: Not Supported 00:12:17.296 SGL Metadata Address: Not Supported 00:12:17.296 SGL Offset: Not Supported 00:12:17.296 Transport SGL Data Block: Not Supported 00:12:17.296 Replay Protected Memory Block: Not Supported 00:12:17.296 00:12:17.296 Firmware Slot Information 00:12:17.296 ========================= 00:12:17.296 Active slot: 1 00:12:17.296 Slot 1 Firmware Revision: 1.0 00:12:17.296 00:12:17.296 00:12:17.296 Commands Supported and Effects 00:12:17.296 ============================== 00:12:17.296 Admin Commands 00:12:17.296 -------------- 00:12:17.296 Delete I/O Submission Queue (00h): Supported 00:12:17.296 Create I/O Submission Queue (01h): Supported 00:12:17.296 Get Log Page (02h): Supported 00:12:17.296 Delete I/O Completion Queue (04h): Supported 00:12:17.296 Create I/O Completion Queue (05h): Supported 00:12:17.296 Identify (06h): Supported 00:12:17.296 Abort (08h): Supported 00:12:17.296 Set Features (09h): Supported 00:12:17.296 Get Features (0Ah): Supported 00:12:17.296 Asynchronous Event Request (0Ch): Supported 00:12:17.296 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.296 Directive Send (19h): Supported 00:12:17.296 Directive Receive (1Ah): Supported 00:12:17.296 Virtualization Management (1Ch): Supported 00:12:17.296 Doorbell Buffer Config (7Ch): Supported 00:12:17.296 Format NVM (80h): Supported LBA-Change 00:12:17.296 I/O Commands 00:12:17.296 ------------ 00:12:17.296 Flush (00h): Supported LBA-Change 00:12:17.296 Write (01h): Supported LBA-Change 00:12:17.296 Read (02h): Supported 00:12:17.296 Compare (05h): Supported 00:12:17.296 Write Zeroes (08h): Supported LBA-Change 00:12:17.296 Dataset Management (09h): Supported LBA-Change 00:12:17.296 Unknown (0Ch): Supported 00:12:17.296 Unknown (12h): Supported 00:12:17.296 Copy (19h): Supported LBA-Change 00:12:17.296 Unknown (1Dh): Supported LBA-Change 00:12:17.296 00:12:17.296 Error Log 00:12:17.296 ========= 00:12:17.296 00:12:17.296 Arbitration 00:12:17.296 =========== 00:12:17.296 Arbitration Burst: no limit 00:12:17.296 00:12:17.296 Power Management 00:12:17.296 ================ 00:12:17.296 Number of Power States: 1 00:12:17.296 Current Power State: Power State #0 00:12:17.296 Power State #0: 00:12:17.296 Max Power: 25.00 W 00:12:17.297 Non-Operational State: Operational 00:12:17.297 Entry Latency: 16 microseconds 00:12:17.297 Exit Latency: 4 microseconds 00:12:17.297 Relative Read Throughput: 0 00:12:17.297 Relative Read Latency: 0 00:12:17.297 Relative Write Throughput: 0 00:12:17.297 Relative Write Latency: 0 00:12:17.297 Idle Power: Not Reported 00:12:17.297 Active Power: Not Reported 00:12:17.297 Non-Operational Permissive Mode: Not Supported 00:12:17.297 00:12:17.297 Health Information 00:12:17.297 ================== 00:12:17.297 Critical Warnings: 00:12:17.297 
Available Spare Space: OK 00:12:17.297 Temperature: OK 00:12:17.297 Device Reliability: OK 00:12:17.297 Read Only: No 00:12:17.297 Volatile Memory Backup: OK 00:12:17.297 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.297 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.297 Available Spare: 0% 00:12:17.297 Available Spare Threshold: 0% 00:12:17.297 Life Percentage Used: 0% 00:12:17.297 Data Units Read: 876 00:12:17.297 Data Units Written: 805 00:12:17.297 Host Read Commands: 36681 00:12:17.297 Host Write Commands: 36104 00:12:17.297 Controller Busy Time: 0 minutes 00:12:17.297 Power Cycles: 0 00:12:17.297 Power On Hours: 0 hours 00:12:17.297 Unsafe Shutdowns: 0 00:12:17.297 Unrecoverable Media Errors: 0 00:12:17.297 Lifetime Error Log Entries: 0 00:12:17.297 Warning Temperature Time: 0 minutes 00:12:17.297 Critical Temperature Time: 0 minutes 00:12:17.297 00:12:17.297 Number of Queues 00:12:17.297 ================ 00:12:17.297 Number of I/O Submission Queues: 64 00:12:17.297 Number of I/O Completion Queues: 64 00:12:17.297 00:12:17.297 ZNS Specific Controller Data 00:12:17.297 ============================ 00:12:17.297 Zone Append Size Limit: 0 00:12:17.297 00:12:17.297 00:12:17.297 Active Namespaces 00:12:17.297 ================= 00:12:17.297 Namespace ID:1 00:12:17.297 Error Recovery Timeout: Unlimited 00:12:17.297 Command Set Identifier: NVM (00h) 00:12:17.297 Deallocate: Supported 00:12:17.297 Deallocated/Unwritten Error: Supported 00:12:17.297 Deallocated Read Value: All 0x00 00:12:17.297 Deallocate in Write Zeroes: Not Supported 00:12:17.297 Deallocated Guard Field: 0xFFFF 00:12:17.297 Flush: Supported 00:12:17.297 Reservation: Not Supported 00:12:17.297 Namespace Sharing Capabilities: Multiple Controllers 00:12:17.297 Size (in LBAs): 262144 (1GiB) 00:12:17.297 Capacity (in LBAs): 262144 (1GiB) 00:12:17.297 Utilization (in LBAs): 262144 (1GiB) 00:12:17.297 Thin Provisioning: Not Supported 00:12:17.297 Per-NS Atomic Units: No 00:12:17.297 Maximum Single Source Range Length: 128 00:12:17.297 Maximum Copy Length: 128 00:12:17.297 Maximum Source Range Count: 128 00:12:17.297 NGUID/EUI64 Never Reused: No 00:12:17.297 Namespace Write Protected: No 00:12:17.297 Endurance group ID: 1 00:12:17.297 Number of LBA Formats: 8 00:12:17.297 Current LBA Format: LBA Format #04 00:12:17.297 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.297 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.297 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.297 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.297 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.297 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.297 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.297 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.297 00:12:17.297 Get Feature FDP: 00:12:17.297 ================ 00:12:17.297 Enabled: Yes 00:12:17.297 FDP configuration index: 0 00:12:17.297 00:12:17.297 FDP configurations log page 00:12:17.297 =========================== 00:12:17.297 Number of FDP configurations: 1 00:12:17.297 Version: 0 00:12:17.297 Size: 112 00:12:17.297 FDP Configuration Descriptor: 0 00:12:17.297 Descriptor Size: 96 00:12:17.297 Reclaim Group Identifier format: 2 00:12:17.297 FDP Volatile Write Cache: Not Present 00:12:17.297 FDP Configuration: Valid 00:12:17.297 Vendor Specific Size: 0 00:12:17.297 Number of Reclaim Groups: 2 00:12:17.297 Number of Reclaim Unit Handles: 8 00:12:17.297 Max Placement Identifiers: 128 00:12:17.297 Number of
Namespaces Supported: 256 00:12:17.297 Reclaim unit Nominal Size: 6000000 bytes 00:12:17.297 Estimated Reclaim Unit Time Limit: Not Reported 00:12:17.297 RUH Desc #000: RUH Type: Initially Isolated 00:12:17.297 RUH Desc #001: RUH Type: Initially Isolated 00:12:17.297 RUH Desc #002: RUH Type: Initially Isolated 00:12:17.297 RUH Desc #003: RUH Type: Initially Isolated 00:12:17.297 RUH Desc #004: RUH Type: Initially Isolated 00:12:17.297 RUH Desc #005: RUH Type: Initially Isolated 00:12:17.297 RUH Desc #006: RUH Type: Initially Isolated 00:12:17.297 RUH Desc #007: RUH Type: Initially Isolated 00:12:17.297 00:12:17.297 FDP reclaim unit handle usage log page 00:12:17.297 ====================================== 00:12:17.297 Number of Reclaim Unit Handles: 8 00:12:17.297 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:17.297 RUH Usage Desc #001: RUH Attributes: Unused 00:12:17.297 RUH Usage Desc #002: RUH Attributes: Unused 00:12:17.297 RUH Usage Desc #003: RUH Attributes: Unused 00:12:17.298 RUH Usage Desc #004: RUH Attributes: Unused 00:12:17.298 RUH Usage Desc #005: RUH Attributes: Unused 00:12:17.298 RUH Usage Desc #006: RUH Attributes: Unused 00:12:17.298 RUH Usage Desc #007: RUH Attributes: Unused 00:12:17.298 00:12:17.298 FDP statistics log page 00:12:17.298 ======================= 00:12:17.298 Host bytes with metadata written: 524394496 00:12:17.298 [2024-11-25 20:30:25.392366] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64348 terminated unexpected 00:12:17.298 Media bytes with metadata written: 524451840 00:12:17.298 Media bytes erased: 0 00:12:17.298 00:12:17.298 FDP events log page 00:12:17.298 =================== 00:12:17.298 Number of FDP events: 0 00:12:17.298 00:12:17.298 NVM Specific Namespace Data 00:12:17.298 =========================== 00:12:17.298 Logical Block Storage Tag Mask: 0 00:12:17.298 Protection Information Capabilities: 00:12:17.298 16b Guard Protection Information Storage Tag Support: No 00:12:17.298 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.298 Storage Tag Check Read Support: No 00:12:17.298 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.298 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.298 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.298 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.298 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.298 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.298 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.298 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.298 ===================================================== 00:12:17.298 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:17.298 ===================================================== 00:12:17.298 Controller Capabilities/Features 00:12:17.298 ================================ 00:12:17.298 Vendor ID: 1b36 00:12:17.298 Subsystem Vendor ID: 1af4 00:12:17.298 Serial Number: 12342 00:12:17.298 Model Number: QEMU NVMe Ctrl 00:12:17.298 Firmware Version: 8.0.0 00:12:17.298 Recommended Arb Burst: 6 00:12:17.298 IEEE OUI Identifier: 00 54 52 00:12:17.298 Multi-path I/O
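
A note on the FDP output above for subsystem 12343: the statistics log page is the quick health read, host bytes versus media bytes approximates write amplification, and the usage page shows only RUH #000 has been claimed ("Controller Specified"). A rough extraction of that ratio from a captured dump; identify.log is a hypothetical file holding one FDP controller's identify output:

host=$(awk -F': ' '/Host bytes with metadata written/ {print $2; exit}' identify.log)
media=$(awk -F': ' '/Media bytes with metadata written/ {print $2; exit}' identify.log)
awk -v h="$host" -v m="$media" 'BEGIN { printf "write amplification ~ %.4f\n", m / h }'

With this run's numbers (524451840 / 524394496) that comes out to roughly 1.0001, i.e. essentially no amplification yet.
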
00:12:17.298 May have multiple subsystem ports: No 00:12:17.298 May have multiple controllers: No 00:12:17.298 Associated with SR-IOV VF: No 00:12:17.298 Max Data Transfer Size: 524288 00:12:17.298 Max Number of Namespaces: 256 00:12:17.298 Max Number of I/O Queues: 64 00:12:17.298 NVMe Specification Version (VS): 1.4 00:12:17.298 NVMe Specification Version (Identify): 1.4 00:12:17.298 Maximum Queue Entries: 2048 00:12:17.298 Contiguous Queues Required: Yes 00:12:17.298 Arbitration Mechanisms Supported 00:12:17.298 Weighted Round Robin: Not Supported 00:12:17.298 Vendor Specific: Not Supported 00:12:17.298 Reset Timeout: 7500 ms 00:12:17.298 Doorbell Stride: 4 bytes 00:12:17.298 NVM Subsystem Reset: Not Supported 00:12:17.298 Command Sets Supported 00:12:17.298 NVM Command Set: Supported 00:12:17.298 Boot Partition: Not Supported 00:12:17.298 Memory Page Size Minimum: 4096 bytes 00:12:17.298 Memory Page Size Maximum: 65536 bytes 00:12:17.298 Persistent Memory Region: Not Supported 00:12:17.298 Optional Asynchronous Events Supported 00:12:17.298 Namespace Attribute Notices: Supported 00:12:17.298 Firmware Activation Notices: Not Supported 00:12:17.298 ANA Change Notices: Not Supported 00:12:17.298 PLE Aggregate Log Change Notices: Not Supported 00:12:17.298 LBA Status Info Alert Notices: Not Supported 00:12:17.298 EGE Aggregate Log Change Notices: Not Supported 00:12:17.298 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.298 Zone Descriptor Change Notices: Not Supported 00:12:17.298 Discovery Log Change Notices: Not Supported 00:12:17.298 Controller Attributes 00:12:17.298 128-bit Host Identifier: Not Supported 00:12:17.298 Non-Operational Permissive Mode: Not Supported 00:12:17.298 NVM Sets: Not Supported 00:12:17.298 Read Recovery Levels: Not Supported 00:12:17.298 Endurance Groups: Not Supported 00:12:17.298 Predictable Latency Mode: Not Supported 00:12:17.298 Traffic Based Keep ALive: Not Supported 00:12:17.298 Namespace Granularity: Not Supported 00:12:17.298 SQ Associations: Not Supported 00:12:17.298 UUID List: Not Supported 00:12:17.298 Multi-Domain Subsystem: Not Supported 00:12:17.299 Fixed Capacity Management: Not Supported 00:12:17.299 Variable Capacity Management: Not Supported 00:12:17.299 Delete Endurance Group: Not Supported 00:12:17.299 Delete NVM Set: Not Supported 00:12:17.299 Extended LBA Formats Supported: Supported 00:12:17.299 Flexible Data Placement Supported: Not Supported 00:12:17.299 00:12:17.299 Controller Memory Buffer Support 00:12:17.299 ================================ 00:12:17.299 Supported: No 00:12:17.299 00:12:17.299 Persistent Memory Region Support 00:12:17.299 ================================ 00:12:17.299 Supported: No 00:12:17.299 00:12:17.299 Admin Command Set Attributes 00:12:17.299 ============================ 00:12:17.299 Security Send/Receive: Not Supported 00:12:17.299 Format NVM: Supported 00:12:17.299 Firmware Activate/Download: Not Supported 00:12:17.299 Namespace Management: Supported 00:12:17.299 Device Self-Test: Not Supported 00:12:17.299 Directives: Supported 00:12:17.299 NVMe-MI: Not Supported 00:12:17.299 Virtualization Management: Not Supported 00:12:17.299 Doorbell Buffer Config: Supported 00:12:17.299 Get LBA Status Capability: Not Supported 00:12:17.299 Command & Feature Lockdown Capability: Not Supported 00:12:17.299 Abort Command Limit: 4 00:12:17.299 Async Event Request Limit: 4 00:12:17.299 Number of Firmware Slots: N/A 00:12:17.299 Firmware Slot 1 Read-Only: N/A 00:12:17.299 Firmware Activation Without Reset: N/A 
00:12:17.299 Multiple Update Detection Support: N/A 00:12:17.299 Firmware Update Granularity: No Information Provided 00:12:17.299 Per-Namespace SMART Log: Yes 00:12:17.299 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.299 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:17.299 Command Effects Log Page: Supported 00:12:17.299 Get Log Page Extended Data: Supported 00:12:17.299 Telemetry Log Pages: Not Supported 00:12:17.299 Persistent Event Log Pages: Not Supported 00:12:17.299 Supported Log Pages Log Page: May Support 00:12:17.299 Commands Supported & Effects Log Page: Not Supported 00:12:17.299 Feature Identifiers & Effects Log Page:May Support 00:12:17.299 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.299 Data Area 4 for Telemetry Log: Not Supported 00:12:17.299 Error Log Page Entries Supported: 1 00:12:17.299 Keep Alive: Not Supported 00:12:17.299 00:12:17.299 NVM Command Set Attributes 00:12:17.299 ========================== 00:12:17.299 Submission Queue Entry Size 00:12:17.299 Max: 64 00:12:17.299 Min: 64 00:12:17.299 Completion Queue Entry Size 00:12:17.299 Max: 16 00:12:17.299 Min: 16 00:12:17.299 Number of Namespaces: 256 00:12:17.299 Compare Command: Supported 00:12:17.299 Write Uncorrectable Command: Not Supported 00:12:17.299 Dataset Management Command: Supported 00:12:17.299 Write Zeroes Command: Supported 00:12:17.299 Set Features Save Field: Supported 00:12:17.299 Reservations: Not Supported 00:12:17.299 Timestamp: Supported 00:12:17.299 Copy: Supported 00:12:17.299 Volatile Write Cache: Present 00:12:17.299 Atomic Write Unit (Normal): 1 00:12:17.299 Atomic Write Unit (PFail): 1 00:12:17.299 Atomic Compare & Write Unit: 1 00:12:17.299 Fused Compare & Write: Not Supported 00:12:17.299 Scatter-Gather List 00:12:17.299 SGL Command Set: Supported 00:12:17.299 SGL Keyed: Not Supported 00:12:17.299 SGL Bit Bucket Descriptor: Not Supported 00:12:17.299 SGL Metadata Pointer: Not Supported 00:12:17.299 Oversized SGL: Not Supported 00:12:17.299 SGL Metadata Address: Not Supported 00:12:17.299 SGL Offset: Not Supported 00:12:17.299 Transport SGL Data Block: Not Supported 00:12:17.299 Replay Protected Memory Block: Not Supported 00:12:17.299 00:12:17.299 Firmware Slot Information 00:12:17.299 ========================= 00:12:17.299 Active slot: 1 00:12:17.299 Slot 1 Firmware Revision: 1.0 00:12:17.299 00:12:17.299 00:12:17.299 Commands Supported and Effects 00:12:17.299 ============================== 00:12:17.299 Admin Commands 00:12:17.299 -------------- 00:12:17.299 Delete I/O Submission Queue (00h): Supported 00:12:17.299 Create I/O Submission Queue (01h): Supported 00:12:17.299 Get Log Page (02h): Supported 00:12:17.299 Delete I/O Completion Queue (04h): Supported 00:12:17.299 Create I/O Completion Queue (05h): Supported 00:12:17.299 Identify (06h): Supported 00:12:17.299 Abort (08h): Supported 00:12:17.299 Set Features (09h): Supported 00:12:17.299 Get Features (0Ah): Supported 00:12:17.299 Asynchronous Event Request (0Ch): Supported 00:12:17.299 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.299 Directive Send (19h): Supported 00:12:17.299 Directive Receive (1Ah): Supported 00:12:17.299 Virtualization Management (1Ch): Supported 00:12:17.299 Doorbell Buffer Config (7Ch): Supported 00:12:17.299 Format NVM (80h): Supported LBA-Change 00:12:17.299 I/O Commands 00:12:17.299 ------------ 00:12:17.299 Flush (00h): Supported LBA-Change 00:12:17.299 Write (01h): Supported LBA-Change 00:12:17.299 Read (02h): Supported 00:12:17.299 Compare (05h): 
Supported 00:12:17.299 Write Zeroes (08h): Supported LBA-Change 00:12:17.299 Dataset Management (09h): Supported LBA-Change 00:12:17.299 Unknown (0Ch): Supported 00:12:17.299 Unknown (12h): Supported 00:12:17.299 Copy (19h): Supported LBA-Change 00:12:17.299 Unknown (1Dh): Supported LBA-Change 00:12:17.299 00:12:17.299 Error Log 00:12:17.300 ========= 00:12:17.300 00:12:17.300 Arbitration 00:12:17.300 =========== 00:12:17.300 Arbitration Burst: no limit 00:12:17.300 00:12:17.300 Power Management 00:12:17.300 ================ 00:12:17.300 Number of Power States: 1 00:12:17.300 Current Power State: Power State #0 00:12:17.300 Power State #0: 00:12:17.300 Max Power: 25.00 W 00:12:17.300 Non-Operational State: Operational 00:12:17.300 Entry Latency: 16 microseconds 00:12:17.300 Exit Latency: 4 microseconds 00:12:17.300 Relative Read Throughput: 0 00:12:17.300 Relative Read Latency: 0 00:12:17.300 Relative Write Throughput: 0 00:12:17.300 Relative Write Latency: 0 00:12:17.300 Idle Power: Not Reported 00:12:17.300 Active Power: Not Reported 00:12:17.300 Non-Operational Permissive Mode: Not Supported 00:12:17.300 00:12:17.300 Health Information 00:12:17.300 ================== 00:12:17.300 Critical Warnings: 00:12:17.300 Available Spare Space: OK 00:12:17.300 Temperature: OK 00:12:17.300 Device Reliability: OK 00:12:17.300 Read Only: No 00:12:17.300 Volatile Memory Backup: OK 00:12:17.300 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.300 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.300 Available Spare: 0% 00:12:17.300 Available Spare Threshold: 0% 00:12:17.300 Life Percentage Used: 0% 00:12:17.300 Data Units Read: 2377 00:12:17.300 Data Units Written: 2164 00:12:17.300 Host Read Commands: 107489 00:12:17.300 Host Write Commands: 105758 00:12:17.300 Controller Busy Time: 0 minutes 00:12:17.300 Power Cycles: 0 00:12:17.300 Power On Hours: 0 hours 00:12:17.300 Unsafe Shutdowns: 0 00:12:17.300 Unrecoverable Media Errors: 0 00:12:17.300 Lifetime Error Log Entries: 0 00:12:17.300 Warning Temperature Time: 0 minutes 00:12:17.300 Critical Temperature Time: 0 minutes 00:12:17.300 00:12:17.300 Number of Queues 00:12:17.300 ================ 00:12:17.300 Number of I/O Submission Queues: 64 00:12:17.300 Number of I/O Completion Queues: 64 00:12:17.300 00:12:17.300 ZNS Specific Controller Data 00:12:17.300 ============================ 00:12:17.300 Zone Append Size Limit: 0 00:12:17.300 00:12:17.300 00:12:17.300 Active Namespaces 00:12:17.300 ================= 00:12:17.300 Namespace ID:1 00:12:17.300 Error Recovery Timeout: Unlimited 00:12:17.300 Command Set Identifier: NVM (00h) 00:12:17.300 Deallocate: Supported 00:12:17.300 Deallocated/Unwritten Error: Supported 00:12:17.300 Deallocated Read Value: All 0x00 00:12:17.300 Deallocate in Write Zeroes: Not Supported 00:12:17.300 Deallocated Guard Field: 0xFFFF 00:12:17.300 Flush: Supported 00:12:17.300 Reservation: Not Supported 00:12:17.300 Namespace Sharing Capabilities: Private 00:12:17.300 Size (in LBAs): 1048576 (4GiB) 00:12:17.300 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.300 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.300 Thin Provisioning: Not Supported 00:12:17.300 Per-NS Atomic Units: No 00:12:17.300 Maximum Single Source Range Length: 128 00:12:17.300 Maximum Copy Length: 128 00:12:17.300 Maximum Source Range Count: 128 00:12:17.300 NGUID/EUI64 Never Reused: No 00:12:17.300 Namespace Write Protected: No 00:12:17.300 Number of LBA Formats: 8 00:12:17.300 Current LBA Format: LBA Format #04 00:12:17.300 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:12:17.300 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.300 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.300 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.300 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.300 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.300 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.300 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.300 00:12:17.300 NVM Specific Namespace Data 00:12:17.300 =========================== 00:12:17.300 Logical Block Storage Tag Mask: 0 00:12:17.300 Protection Information Capabilities: 00:12:17.300 16b Guard Protection Information Storage Tag Support: No 00:12:17.300 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.300 Storage Tag Check Read Support: No 00:12:17.300 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Namespace ID:2 00:12:17.300 Error Recovery Timeout: Unlimited 00:12:17.300 Command Set Identifier: NVM (00h) 00:12:17.300 Deallocate: Supported 00:12:17.300 Deallocated/Unwritten Error: Supported 00:12:17.300 Deallocated Read Value: All 0x00 00:12:17.300 Deallocate in Write Zeroes: Not Supported 00:12:17.300 Deallocated Guard Field: 0xFFFF 00:12:17.300 Flush: Supported 00:12:17.300 Reservation: Not Supported 00:12:17.300 Namespace Sharing Capabilities: Private 00:12:17.300 Size (in LBAs): 1048576 (4GiB) 00:12:17.300 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.300 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.300 Thin Provisioning: Not Supported 00:12:17.300 Per-NS Atomic Units: No 00:12:17.300 Maximum Single Source Range Length: 128 00:12:17.300 Maximum Copy Length: 128 00:12:17.300 Maximum Source Range Count: 128 00:12:17.300 NGUID/EUI64 Never Reused: No 00:12:17.300 Namespace Write Protected: No 00:12:17.300 Number of LBA Formats: 8 00:12:17.300 Current LBA Format: LBA Format #04 00:12:17.300 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.300 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.300 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.300 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.300 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.300 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.300 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.300 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.300 00:12:17.300 NVM Specific Namespace Data 00:12:17.300 =========================== 00:12:17.300 Logical Block Storage Tag Mask: 0 00:12:17.300 Protection Information Capabilities: 00:12:17.300 16b Guard Protection Information Storage Tag Support: No 00:12:17.300 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
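A quick arithmetic check of the size annotations above: these namespaces report Size (in LBAs): 1048576 with Current LBA Format #04 (4096-byte data, no metadata), and 1048576 x 4096 is exactly 4 GiB, matching the (4GiB) note. A one-off sketch:

  # 2^20 LBAs * 2^12 bytes/LBA = 2^32 bytes = 4 GiB
  lbas=1048576
  bytes_per_lba=4096
  echo $(( lbas * bytes_per_lba ))   # 4294967296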
00:12:17.300 Storage Tag Check Read Support: No 00:12:17.300 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.300 Namespace ID:3 00:12:17.300 Error Recovery Timeout: Unlimited 00:12:17.300 Command Set Identifier: NVM (00h) 00:12:17.300 Deallocate: Supported 00:12:17.300 Deallocated/Unwritten Error: Supported 00:12:17.300 Deallocated Read Value: All 0x00 00:12:17.300 Deallocate in Write Zeroes: Not Supported 00:12:17.300 Deallocated Guard Field: 0xFFFF 00:12:17.300 Flush: Supported 00:12:17.300 Reservation: Not Supported 00:12:17.300 Namespace Sharing Capabilities: Private 00:12:17.301 Size (in LBAs): 1048576 (4GiB) 00:12:17.560 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.560 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.560 Thin Provisioning: Not Supported 00:12:17.560 Per-NS Atomic Units: No 00:12:17.560 Maximum Single Source Range Length: 128 00:12:17.560 Maximum Copy Length: 128 00:12:17.560 Maximum Source Range Count: 128 00:12:17.560 NGUID/EUI64 Never Reused: No 00:12:17.560 Namespace Write Protected: No 00:12:17.560 Number of LBA Formats: 8 00:12:17.560 Current LBA Format: LBA Format #04 00:12:17.560 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.560 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.560 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.560 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.560 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.560 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.560 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.560 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.560 00:12:17.560 NVM Specific Namespace Data 00:12:17.560 =========================== 00:12:17.560 Logical Block Storage Tag Mask: 0 00:12:17.560 Protection Information Capabilities: 00:12:17.560 16b Guard Protection Information Storage Tag Support: No 00:12:17.560 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.560 Storage Tag Check Read Support: No 00:12:17.560 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.560 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.560 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.560 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.560 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.560 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.560 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.560 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.560 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:17.560 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:17.822 ===================================================== 00:12:17.822 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:17.822 ===================================================== 00:12:17.822 Controller Capabilities/Features 00:12:17.822 ================================ 00:12:17.822 Vendor ID: 1b36 00:12:17.822 Subsystem Vendor ID: 1af4 00:12:17.822 Serial Number: 12340 00:12:17.822 Model Number: QEMU NVMe Ctrl 00:12:17.822 Firmware Version: 8.0.0 00:12:17.822 Recommended Arb Burst: 6 00:12:17.822 IEEE OUI Identifier: 00 54 52 00:12:17.822 Multi-path I/O 00:12:17.822 May have multiple subsystem ports: No 00:12:17.822 May have multiple controllers: No 00:12:17.822 Associated with SR-IOV VF: No 00:12:17.822 Max Data Transfer Size: 524288 00:12:17.822 Max Number of Namespaces: 256 00:12:17.822 Max Number of I/O Queues: 64 00:12:17.822 NVMe Specification Version (VS): 1.4 00:12:17.822 NVMe Specification Version (Identify): 1.4 00:12:17.822 Maximum Queue Entries: 2048 00:12:17.822 Contiguous Queues Required: Yes 00:12:17.822 Arbitration Mechanisms Supported 00:12:17.822 Weighted Round Robin: Not Supported 00:12:17.822 Vendor Specific: Not Supported 00:12:17.822 Reset Timeout: 7500 ms 00:12:17.822 Doorbell Stride: 4 bytes 00:12:17.822 NVM Subsystem Reset: Not Supported 00:12:17.822 Command Sets Supported 00:12:17.822 NVM Command Set: Supported 00:12:17.822 Boot Partition: Not Supported 00:12:17.822 Memory Page Size Minimum: 4096 bytes 00:12:17.822 Memory Page Size Maximum: 65536 bytes 00:12:17.822 Persistent Memory Region: Not Supported 00:12:17.822 Optional Asynchronous Events Supported 00:12:17.822 Namespace Attribute Notices: Supported 00:12:17.822 Firmware Activation Notices: Not Supported 00:12:17.822 ANA Change Notices: Not Supported 00:12:17.822 PLE Aggregate Log Change Notices: Not Supported 00:12:17.822 LBA Status Info Alert Notices: Not Supported 00:12:17.822 EGE Aggregate Log Change Notices: Not Supported 00:12:17.822 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.822 Zone Descriptor Change Notices: Not Supported 00:12:17.822 Discovery Log Change Notices: Not Supported 00:12:17.822 Controller Attributes 00:12:17.822 128-bit Host Identifier: Not Supported 00:12:17.822 Non-Operational Permissive Mode: Not Supported 00:12:17.822 NVM Sets: Not Supported 00:12:17.822 Read Recovery Levels: Not Supported 00:12:17.822 Endurance Groups: Not Supported 00:12:17.822 Predictable Latency Mode: Not Supported 00:12:17.822 Traffic Based Keep ALive: Not Supported 00:12:17.822 Namespace Granularity: Not Supported 00:12:17.822 SQ Associations: Not Supported 00:12:17.822 UUID List: Not Supported 00:12:17.822 Multi-Domain Subsystem: Not Supported 00:12:17.822 Fixed Capacity Management: Not Supported 00:12:17.822 Variable Capacity Management: Not Supported 00:12:17.822 Delete Endurance Group: Not Supported 00:12:17.822 Delete NVM Set: Not Supported 00:12:17.822 Extended LBA Formats Supported: Supported 00:12:17.822 Flexible Data Placement Supported: Not Supported 00:12:17.822 00:12:17.822 Controller Memory Buffer Support 00:12:17.822 ================================ 00:12:17.822 Supported: No 00:12:17.822 00:12:17.822 Persistent Memory Region Support 00:12:17.822 
================================ 00:12:17.822 Supported: No 00:12:17.822 00:12:17.822 Admin Command Set Attributes 00:12:17.822 ============================ 00:12:17.822 Security Send/Receive: Not Supported 00:12:17.822 Format NVM: Supported 00:12:17.822 Firmware Activate/Download: Not Supported 00:12:17.822 Namespace Management: Supported 00:12:17.822 Device Self-Test: Not Supported 00:12:17.822 Directives: Supported 00:12:17.822 NVMe-MI: Not Supported 00:12:17.822 Virtualization Management: Not Supported 00:12:17.822 Doorbell Buffer Config: Supported 00:12:17.822 Get LBA Status Capability: Not Supported 00:12:17.822 Command & Feature Lockdown Capability: Not Supported 00:12:17.822 Abort Command Limit: 4 00:12:17.822 Async Event Request Limit: 4 00:12:17.822 Number of Firmware Slots: N/A 00:12:17.822 Firmware Slot 1 Read-Only: N/A 00:12:17.822 Firmware Activation Without Reset: N/A 00:12:17.822 Multiple Update Detection Support: N/A 00:12:17.822 Firmware Update Granularity: No Information Provided 00:12:17.822 Per-Namespace SMART Log: Yes 00:12:17.822 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.822 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:17.822 Command Effects Log Page: Supported 00:12:17.822 Get Log Page Extended Data: Supported 00:12:17.822 Telemetry Log Pages: Not Supported 00:12:17.822 Persistent Event Log Pages: Not Supported 00:12:17.822 Supported Log Pages Log Page: May Support 00:12:17.822 Commands Supported & Effects Log Page: Not Supported 00:12:17.822 Feature Identifiers & Effects Log Page:May Support 00:12:17.822 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.822 Data Area 4 for Telemetry Log: Not Supported 00:12:17.822 Error Log Page Entries Supported: 1 00:12:17.822 Keep Alive: Not Supported 00:12:17.822 00:12:17.822 NVM Command Set Attributes 00:12:17.822 ========================== 00:12:17.822 Submission Queue Entry Size 00:12:17.822 Max: 64 00:12:17.822 Min: 64 00:12:17.822 Completion Queue Entry Size 00:12:17.822 Max: 16 00:12:17.822 Min: 16 00:12:17.822 Number of Namespaces: 256 00:12:17.822 Compare Command: Supported 00:12:17.822 Write Uncorrectable Command: Not Supported 00:12:17.822 Dataset Management Command: Supported 00:12:17.822 Write Zeroes Command: Supported 00:12:17.822 Set Features Save Field: Supported 00:12:17.822 Reservations: Not Supported 00:12:17.822 Timestamp: Supported 00:12:17.822 Copy: Supported 00:12:17.822 Volatile Write Cache: Present 00:12:17.822 Atomic Write Unit (Normal): 1 00:12:17.822 Atomic Write Unit (PFail): 1 00:12:17.822 Atomic Compare & Write Unit: 1 00:12:17.822 Fused Compare & Write: Not Supported 00:12:17.822 Scatter-Gather List 00:12:17.822 SGL Command Set: Supported 00:12:17.822 SGL Keyed: Not Supported 00:12:17.822 SGL Bit Bucket Descriptor: Not Supported 00:12:17.822 SGL Metadata Pointer: Not Supported 00:12:17.822 Oversized SGL: Not Supported 00:12:17.822 SGL Metadata Address: Not Supported 00:12:17.822 SGL Offset: Not Supported 00:12:17.822 Transport SGL Data Block: Not Supported 00:12:17.822 Replay Protected Memory Block: Not Supported 00:12:17.822 00:12:17.822 Firmware Slot Information 00:12:17.822 ========================= 00:12:17.822 Active slot: 1 00:12:17.822 Slot 1 Firmware Revision: 1.0 00:12:17.822 00:12:17.822 00:12:17.822 Commands Supported and Effects 00:12:17.822 ============================== 00:12:17.822 Admin Commands 00:12:17.822 -------------- 00:12:17.822 Delete I/O Submission Queue (00h): Supported 00:12:17.822 Create I/O Submission Queue (01h): Supported 00:12:17.822 
Get Log Page (02h): Supported 00:12:17.822 Delete I/O Completion Queue (04h): Supported 00:12:17.822 Create I/O Completion Queue (05h): Supported 00:12:17.822 Identify (06h): Supported 00:12:17.822 Abort (08h): Supported 00:12:17.822 Set Features (09h): Supported 00:12:17.822 Get Features (0Ah): Supported 00:12:17.822 Asynchronous Event Request (0Ch): Supported 00:12:17.822 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.822 Directive Send (19h): Supported 00:12:17.822 Directive Receive (1Ah): Supported 00:12:17.822 Virtualization Management (1Ch): Supported 00:12:17.822 Doorbell Buffer Config (7Ch): Supported 00:12:17.822 Format NVM (80h): Supported LBA-Change 00:12:17.822 I/O Commands 00:12:17.822 ------------ 00:12:17.823 Flush (00h): Supported LBA-Change 00:12:17.823 Write (01h): Supported LBA-Change 00:12:17.823 Read (02h): Supported 00:12:17.823 Compare (05h): Supported 00:12:17.823 Write Zeroes (08h): Supported LBA-Change 00:12:17.823 Dataset Management (09h): Supported LBA-Change 00:12:17.823 Unknown (0Ch): Supported 00:12:17.823 Unknown (12h): Supported 00:12:17.823 Copy (19h): Supported LBA-Change 00:12:17.823 Unknown (1Dh): Supported LBA-Change 00:12:17.823 00:12:17.823 Error Log 00:12:17.823 ========= 00:12:17.823 00:12:17.823 Arbitration 00:12:17.823 =========== 00:12:17.823 Arbitration Burst: no limit 00:12:17.823 00:12:17.823 Power Management 00:12:17.823 ================ 00:12:17.823 Number of Power States: 1 00:12:17.823 Current Power State: Power State #0 00:12:17.823 Power State #0: 00:12:17.823 Max Power: 25.00 W 00:12:17.823 Non-Operational State: Operational 00:12:17.823 Entry Latency: 16 microseconds 00:12:17.823 Exit Latency: 4 microseconds 00:12:17.823 Relative Read Throughput: 0 00:12:17.823 Relative Read Latency: 0 00:12:17.823 Relative Write Throughput: 0 00:12:17.823 Relative Write Latency: 0 00:12:17.823 Idle Power: Not Reported 00:12:17.823 Active Power: Not Reported 00:12:17.823 Non-Operational Permissive Mode: Not Supported 00:12:17.823 00:12:17.823 Health Information 00:12:17.823 ================== 00:12:17.823 Critical Warnings: 00:12:17.823 Available Spare Space: OK 00:12:17.823 Temperature: OK 00:12:17.823 Device Reliability: OK 00:12:17.823 Read Only: No 00:12:17.823 Volatile Memory Backup: OK 00:12:17.823 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.823 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.823 Available Spare: 0% 00:12:17.823 Available Spare Threshold: 0% 00:12:17.823 Life Percentage Used: 0% 00:12:17.823 Data Units Read: 743 00:12:17.823 Data Units Written: 671 00:12:17.823 Host Read Commands: 35101 00:12:17.823 Host Write Commands: 34887 00:12:17.823 Controller Busy Time: 0 minutes 00:12:17.823 Power Cycles: 0 00:12:17.823 Power On Hours: 0 hours 00:12:17.823 Unsafe Shutdowns: 0 00:12:17.823 Unrecoverable Media Errors: 0 00:12:17.823 Lifetime Error Log Entries: 0 00:12:17.823 Warning Temperature Time: 0 minutes 00:12:17.823 Critical Temperature Time: 0 minutes 00:12:17.823 00:12:17.823 Number of Queues 00:12:17.823 ================ 00:12:17.823 Number of I/O Submission Queues: 64 00:12:17.823 Number of I/O Completion Queues: 64 00:12:17.823 00:12:17.823 ZNS Specific Controller Data 00:12:17.823 ============================ 00:12:17.823 Zone Append Size Limit: 0 00:12:17.823 00:12:17.823 00:12:17.823 Active Namespaces 00:12:17.823 ================= 00:12:17.823 Namespace ID:1 00:12:17.823 Error Recovery Timeout: Unlimited 00:12:17.823 Command Set Identifier: NVM (00h) 00:12:17.823 Deallocate: Supported 
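The temperature pairs in these health sections follow the identify tool's integer Kelvin-to-Celsius conversion (323 Kelvin -> 50 Celsius, 343 Kelvin -> 70 Celsius). A sketch reproducing the arithmetic:

  # Matches "Current Temperature: 323 Kelvin (50 Celsius)" above.
  k=323
  echo "$(( k - 273 )) Celsius"   # 50 Celsius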
00:12:17.823 Deallocated/Unwritten Error: Supported 00:12:17.823 Deallocated Read Value: All 0x00 00:12:17.823 Deallocate in Write Zeroes: Not Supported 00:12:17.823 Deallocated Guard Field: 0xFFFF 00:12:17.823 Flush: Supported 00:12:17.823 Reservation: Not Supported 00:12:17.823 Metadata Transferred as: Separate Metadata Buffer 00:12:17.823 Namespace Sharing Capabilities: Private 00:12:17.823 Size (in LBAs): 1548666 (5GiB) 00:12:17.823 Capacity (in LBAs): 1548666 (5GiB) 00:12:17.823 Utilization (in LBAs): 1548666 (5GiB) 00:12:17.823 Thin Provisioning: Not Supported 00:12:17.823 Per-NS Atomic Units: No 00:12:17.823 Maximum Single Source Range Length: 128 00:12:17.823 Maximum Copy Length: 128 00:12:17.823 Maximum Source Range Count: 128 00:12:17.823 NGUID/EUI64 Never Reused: No 00:12:17.823 Namespace Write Protected: No 00:12:17.823 Number of LBA Formats: 8 00:12:17.823 Current LBA Format: LBA Format #07 00:12:17.823 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.823 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.823 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.823 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.823 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.823 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.823 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.823 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.823 00:12:17.823 NVM Specific Namespace Data 00:12:17.823 =========================== 00:12:17.823 Logical Block Storage Tag Mask: 0 00:12:17.823 Protection Information Capabilities: 00:12:17.823 16b Guard Protection Information Storage Tag Support: No 00:12:17.823 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.823 Storage Tag Check Read Support: No 00:12:17.823 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.823 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.823 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.823 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.823 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.823 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.823 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.823 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.823 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:17.823 20:30:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:18.083 ===================================================== 00:12:18.083 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:18.083 ===================================================== 00:12:18.083 Controller Capabilities/Features 00:12:18.083 ================================ 00:12:18.083 Vendor ID: 1b36 00:12:18.083 Subsystem Vendor ID: 1af4 00:12:18.083 Serial Number: 12341 00:12:18.083 Model Number: QEMU NVMe Ctrl 00:12:18.083 Firmware Version: 8.0.0 00:12:18.083 Recommended Arb Burst: 6 00:12:18.083 IEEE OUI Identifier: 00 54 52 00:12:18.083 Multi-path I/O 00:12:18.083 May have multiple subsystem ports: No 00:12:18.083 May have multiple 
controllers: No 00:12:18.083 Associated with SR-IOV VF: No 00:12:18.083 Max Data Transfer Size: 524288 00:12:18.084 Max Number of Namespaces: 256 00:12:18.084 Max Number of I/O Queues: 64 00:12:18.084 NVMe Specification Version (VS): 1.4 00:12:18.084 NVMe Specification Version (Identify): 1.4 00:12:18.084 Maximum Queue Entries: 2048 00:12:18.084 Contiguous Queues Required: Yes 00:12:18.084 Arbitration Mechanisms Supported 00:12:18.084 Weighted Round Robin: Not Supported 00:12:18.084 Vendor Specific: Not Supported 00:12:18.084 Reset Timeout: 7500 ms 00:12:18.084 Doorbell Stride: 4 bytes 00:12:18.084 NVM Subsystem Reset: Not Supported 00:12:18.084 Command Sets Supported 00:12:18.084 NVM Command Set: Supported 00:12:18.084 Boot Partition: Not Supported 00:12:18.084 Memory Page Size Minimum: 4096 bytes 00:12:18.084 Memory Page Size Maximum: 65536 bytes 00:12:18.084 Persistent Memory Region: Not Supported 00:12:18.084 Optional Asynchronous Events Supported 00:12:18.084 Namespace Attribute Notices: Supported 00:12:18.084 Firmware Activation Notices: Not Supported 00:12:18.084 ANA Change Notices: Not Supported 00:12:18.084 PLE Aggregate Log Change Notices: Not Supported 00:12:18.084 LBA Status Info Alert Notices: Not Supported 00:12:18.084 EGE Aggregate Log Change Notices: Not Supported 00:12:18.084 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.084 Zone Descriptor Change Notices: Not Supported 00:12:18.084 Discovery Log Change Notices: Not Supported 00:12:18.084 Controller Attributes 00:12:18.084 128-bit Host Identifier: Not Supported 00:12:18.084 Non-Operational Permissive Mode: Not Supported 00:12:18.084 NVM Sets: Not Supported 00:12:18.084 Read Recovery Levels: Not Supported 00:12:18.084 Endurance Groups: Not Supported 00:12:18.084 Predictable Latency Mode: Not Supported 00:12:18.084 Traffic Based Keep ALive: Not Supported 00:12:18.084 Namespace Granularity: Not Supported 00:12:18.084 SQ Associations: Not Supported 00:12:18.084 UUID List: Not Supported 00:12:18.084 Multi-Domain Subsystem: Not Supported 00:12:18.084 Fixed Capacity Management: Not Supported 00:12:18.084 Variable Capacity Management: Not Supported 00:12:18.084 Delete Endurance Group: Not Supported 00:12:18.084 Delete NVM Set: Not Supported 00:12:18.084 Extended LBA Formats Supported: Supported 00:12:18.084 Flexible Data Placement Supported: Not Supported 00:12:18.084 00:12:18.084 Controller Memory Buffer Support 00:12:18.084 ================================ 00:12:18.084 Supported: No 00:12:18.084 00:12:18.084 Persistent Memory Region Support 00:12:18.084 ================================ 00:12:18.084 Supported: No 00:12:18.084 00:12:18.084 Admin Command Set Attributes 00:12:18.084 ============================ 00:12:18.084 Security Send/Receive: Not Supported 00:12:18.084 Format NVM: Supported 00:12:18.084 Firmware Activate/Download: Not Supported 00:12:18.084 Namespace Management: Supported 00:12:18.084 Device Self-Test: Not Supported 00:12:18.084 Directives: Supported 00:12:18.084 NVMe-MI: Not Supported 00:12:18.084 Virtualization Management: Not Supported 00:12:18.084 Doorbell Buffer Config: Supported 00:12:18.084 Get LBA Status Capability: Not Supported 00:12:18.084 Command & Feature Lockdown Capability: Not Supported 00:12:18.084 Abort Command Limit: 4 00:12:18.084 Async Event Request Limit: 4 00:12:18.084 Number of Firmware Slots: N/A 00:12:18.084 Firmware Slot 1 Read-Only: N/A 00:12:18.084 Firmware Activation Without Reset: N/A 00:12:18.084 Multiple Update Detection Support: N/A 00:12:18.084 Firmware Update 
Granularity: No Information Provided 00:12:18.084 Per-Namespace SMART Log: Yes 00:12:18.084 Asymmetric Namespace Access Log Page: Not Supported 00:12:18.084 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:18.084 Command Effects Log Page: Supported 00:12:18.084 Get Log Page Extended Data: Supported 00:12:18.084 Telemetry Log Pages: Not Supported 00:12:18.084 Persistent Event Log Pages: Not Supported 00:12:18.084 Supported Log Pages Log Page: May Support 00:12:18.084 Commands Supported & Effects Log Page: Not Supported 00:12:18.084 Feature Identifiers & Effects Log Page:May Support 00:12:18.084 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.084 Data Area 4 for Telemetry Log: Not Supported 00:12:18.084 Error Log Page Entries Supported: 1 00:12:18.084 Keep Alive: Not Supported 00:12:18.084 00:12:18.084 NVM Command Set Attributes 00:12:18.084 ========================== 00:12:18.084 Submission Queue Entry Size 00:12:18.084 Max: 64 00:12:18.084 Min: 64 00:12:18.084 Completion Queue Entry Size 00:12:18.084 Max: 16 00:12:18.084 Min: 16 00:12:18.084 Number of Namespaces: 256 00:12:18.084 Compare Command: Supported 00:12:18.084 Write Uncorrectable Command: Not Supported 00:12:18.084 Dataset Management Command: Supported 00:12:18.084 Write Zeroes Command: Supported 00:12:18.084 Set Features Save Field: Supported 00:12:18.084 Reservations: Not Supported 00:12:18.084 Timestamp: Supported 00:12:18.084 Copy: Supported 00:12:18.084 Volatile Write Cache: Present 00:12:18.084 Atomic Write Unit (Normal): 1 00:12:18.084 Atomic Write Unit (PFail): 1 00:12:18.084 Atomic Compare & Write Unit: 1 00:12:18.084 Fused Compare & Write: Not Supported 00:12:18.084 Scatter-Gather List 00:12:18.084 SGL Command Set: Supported 00:12:18.084 SGL Keyed: Not Supported 00:12:18.084 SGL Bit Bucket Descriptor: Not Supported 00:12:18.084 SGL Metadata Pointer: Not Supported 00:12:18.084 Oversized SGL: Not Supported 00:12:18.084 SGL Metadata Address: Not Supported 00:12:18.084 SGL Offset: Not Supported 00:12:18.084 Transport SGL Data Block: Not Supported 00:12:18.084 Replay Protected Memory Block: Not Supported 00:12:18.084 00:12:18.084 Firmware Slot Information 00:12:18.084 ========================= 00:12:18.084 Active slot: 1 00:12:18.084 Slot 1 Firmware Revision: 1.0 00:12:18.084 00:12:18.084 00:12:18.084 Commands Supported and Effects 00:12:18.084 ============================== 00:12:18.084 Admin Commands 00:12:18.084 -------------- 00:12:18.084 Delete I/O Submission Queue (00h): Supported 00:12:18.084 Create I/O Submission Queue (01h): Supported 00:12:18.084 Get Log Page (02h): Supported 00:12:18.084 Delete I/O Completion Queue (04h): Supported 00:12:18.084 Create I/O Completion Queue (05h): Supported 00:12:18.084 Identify (06h): Supported 00:12:18.084 Abort (08h): Supported 00:12:18.084 Set Features (09h): Supported 00:12:18.084 Get Features (0Ah): Supported 00:12:18.084 Asynchronous Event Request (0Ch): Supported 00:12:18.084 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:18.084 Directive Send (19h): Supported 00:12:18.084 Directive Receive (1Ah): Supported 00:12:18.084 Virtualization Management (1Ch): Supported 00:12:18.084 Doorbell Buffer Config (7Ch): Supported 00:12:18.084 Format NVM (80h): Supported LBA-Change 00:12:18.084 I/O Commands 00:12:18.084 ------------ 00:12:18.084 Flush (00h): Supported LBA-Change 00:12:18.084 Write (01h): Supported LBA-Change 00:12:18.084 Read (02h): Supported 00:12:18.084 Compare (05h): Supported 00:12:18.084 Write Zeroes (08h): Supported LBA-Change 00:12:18.084 
Dataset Management (09h): Supported LBA-Change 00:12:18.084 Unknown (0Ch): Supported 00:12:18.084 Unknown (12h): Supported 00:12:18.084 Copy (19h): Supported LBA-Change 00:12:18.084 Unknown (1Dh): Supported LBA-Change 00:12:18.084 00:12:18.084 Error Log 00:12:18.084 ========= 00:12:18.084 00:12:18.084 Arbitration 00:12:18.084 =========== 00:12:18.084 Arbitration Burst: no limit 00:12:18.084 00:12:18.084 Power Management 00:12:18.084 ================ 00:12:18.084 Number of Power States: 1 00:12:18.084 Current Power State: Power State #0 00:12:18.084 Power State #0: 00:12:18.084 Max Power: 25.00 W 00:12:18.084 Non-Operational State: Operational 00:12:18.084 Entry Latency: 16 microseconds 00:12:18.084 Exit Latency: 4 microseconds 00:12:18.084 Relative Read Throughput: 0 00:12:18.084 Relative Read Latency: 0 00:12:18.084 Relative Write Throughput: 0 00:12:18.084 Relative Write Latency: 0 00:12:18.084 Idle Power: Not Reported 00:12:18.084 Active Power: Not Reported 00:12:18.084 Non-Operational Permissive Mode: Not Supported 00:12:18.084 00:12:18.084 Health Information 00:12:18.084 ================== 00:12:18.084 Critical Warnings: 00:12:18.084 Available Spare Space: OK 00:12:18.084 Temperature: OK 00:12:18.084 Device Reliability: OK 00:12:18.084 Read Only: No 00:12:18.084 Volatile Memory Backup: OK 00:12:18.084 Current Temperature: 323 Kelvin (50 Celsius) 00:12:18.084 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:18.084 Available Spare: 0% 00:12:18.084 Available Spare Threshold: 0% 00:12:18.084 Life Percentage Used: 0% 00:12:18.084 Data Units Read: 1167 00:12:18.084 Data Units Written: 1034 00:12:18.084 Host Read Commands: 54183 00:12:18.085 Host Write Commands: 52966 00:12:18.085 Controller Busy Time: 0 minutes 00:12:18.085 Power Cycles: 0 00:12:18.085 Power On Hours: 0 hours 00:12:18.085 Unsafe Shutdowns: 0 00:12:18.085 Unrecoverable Media Errors: 0 00:12:18.085 Lifetime Error Log Entries: 0 00:12:18.085 Warning Temperature Time: 0 minutes 00:12:18.085 Critical Temperature Time: 0 minutes 00:12:18.085 00:12:18.085 Number of Queues 00:12:18.085 ================ 00:12:18.085 Number of I/O Submission Queues: 64 00:12:18.085 Number of I/O Completion Queues: 64 00:12:18.085 00:12:18.085 ZNS Specific Controller Data 00:12:18.085 ============================ 00:12:18.085 Zone Append Size Limit: 0 00:12:18.085 00:12:18.085 00:12:18.085 Active Namespaces 00:12:18.085 ================= 00:12:18.085 Namespace ID:1 00:12:18.085 Error Recovery Timeout: Unlimited 00:12:18.085 Command Set Identifier: NVM (00h) 00:12:18.085 Deallocate: Supported 00:12:18.085 Deallocated/Unwritten Error: Supported 00:12:18.085 Deallocated Read Value: All 0x00 00:12:18.085 Deallocate in Write Zeroes: Not Supported 00:12:18.085 Deallocated Guard Field: 0xFFFF 00:12:18.085 Flush: Supported 00:12:18.085 Reservation: Not Supported 00:12:18.085 Namespace Sharing Capabilities: Private 00:12:18.085 Size (in LBAs): 1310720 (5GiB) 00:12:18.085 Capacity (in LBAs): 1310720 (5GiB) 00:12:18.085 Utilization (in LBAs): 1310720 (5GiB) 00:12:18.085 Thin Provisioning: Not Supported 00:12:18.085 Per-NS Atomic Units: No 00:12:18.085 Maximum Single Source Range Length: 128 00:12:18.085 Maximum Copy Length: 128 00:12:18.085 Maximum Source Range Count: 128 00:12:18.085 NGUID/EUI64 Never Reused: No 00:12:18.085 Namespace Write Protected: No 00:12:18.085 Number of LBA Formats: 8 00:12:18.085 Current LBA Format: LBA Format #04 00:12:18.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.085 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:12:18.085 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.085 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:18.085 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.085 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.085 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.085 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.085 00:12:18.085 NVM Specific Namespace Data 00:12:18.085 =========================== 00:12:18.085 Logical Block Storage Tag Mask: 0 00:12:18.085 Protection Information Capabilities: 00:12:18.085 16b Guard Protection Information Storage Tag Support: No 00:12:18.085 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.085 Storage Tag Check Read Support: No 00:12:18.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.085 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.085 20:30:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:18.085 20:30:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:18.345 ===================================================== 00:12:18.345 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:18.345 ===================================================== 00:12:18.345 Controller Capabilities/Features 00:12:18.345 ================================ 00:12:18.345 Vendor ID: 1b36 00:12:18.345 Subsystem Vendor ID: 1af4 00:12:18.345 Serial Number: 12342 00:12:18.345 Model Number: QEMU NVMe Ctrl 00:12:18.345 Firmware Version: 8.0.0 00:12:18.345 Recommended Arb Burst: 6 00:12:18.345 IEEE OUI Identifier: 00 54 52 00:12:18.345 Multi-path I/O 00:12:18.345 May have multiple subsystem ports: No 00:12:18.345 May have multiple controllers: No 00:12:18.345 Associated with SR-IOV VF: No 00:12:18.345 Max Data Transfer Size: 524288 00:12:18.345 Max Number of Namespaces: 256 00:12:18.345 Max Number of I/O Queues: 64 00:12:18.345 NVMe Specification Version (VS): 1.4 00:12:18.345 NVMe Specification Version (Identify): 1.4 00:12:18.345 Maximum Queue Entries: 2048 00:12:18.345 Contiguous Queues Required: Yes 00:12:18.345 Arbitration Mechanisms Supported 00:12:18.345 Weighted Round Robin: Not Supported 00:12:18.345 Vendor Specific: Not Supported 00:12:18.345 Reset Timeout: 7500 ms 00:12:18.345 Doorbell Stride: 4 bytes 00:12:18.345 NVM Subsystem Reset: Not Supported 00:12:18.345 Command Sets Supported 00:12:18.345 NVM Command Set: Supported 00:12:18.345 Boot Partition: Not Supported 00:12:18.345 Memory Page Size Minimum: 4096 bytes 00:12:18.345 Memory Page Size Maximum: 65536 bytes 00:12:18.345 Persistent Memory Region: Not Supported 00:12:18.345 Optional Asynchronous Events Supported 00:12:18.345 Namespace Attribute Notices: Supported 00:12:18.345 
Firmware Activation Notices: Not Supported 00:12:18.345 ANA Change Notices: Not Supported 00:12:18.345 PLE Aggregate Log Change Notices: Not Supported 00:12:18.346 LBA Status Info Alert Notices: Not Supported 00:12:18.346 EGE Aggregate Log Change Notices: Not Supported 00:12:18.346 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.346 Zone Descriptor Change Notices: Not Supported 00:12:18.346 Discovery Log Change Notices: Not Supported 00:12:18.346 Controller Attributes 00:12:18.346 128-bit Host Identifier: Not Supported 00:12:18.346 Non-Operational Permissive Mode: Not Supported 00:12:18.346 NVM Sets: Not Supported 00:12:18.346 Read Recovery Levels: Not Supported 00:12:18.346 Endurance Groups: Not Supported 00:12:18.346 Predictable Latency Mode: Not Supported 00:12:18.346 Traffic Based Keep ALive: Not Supported 00:12:18.346 Namespace Granularity: Not Supported 00:12:18.346 SQ Associations: Not Supported 00:12:18.346 UUID List: Not Supported 00:12:18.346 Multi-Domain Subsystem: Not Supported 00:12:18.346 Fixed Capacity Management: Not Supported 00:12:18.346 Variable Capacity Management: Not Supported 00:12:18.346 Delete Endurance Group: Not Supported 00:12:18.346 Delete NVM Set: Not Supported 00:12:18.346 Extended LBA Formats Supported: Supported 00:12:18.346 Flexible Data Placement Supported: Not Supported 00:12:18.346 00:12:18.346 Controller Memory Buffer Support 00:12:18.346 ================================ 00:12:18.346 Supported: No 00:12:18.346 00:12:18.346 Persistent Memory Region Support 00:12:18.346 ================================ 00:12:18.346 Supported: No 00:12:18.346 00:12:18.346 Admin Command Set Attributes 00:12:18.346 ============================ 00:12:18.346 Security Send/Receive: Not Supported 00:12:18.346 Format NVM: Supported 00:12:18.346 Firmware Activate/Download: Not Supported 00:12:18.346 Namespace Management: Supported 00:12:18.346 Device Self-Test: Not Supported 00:12:18.346 Directives: Supported 00:12:18.346 NVMe-MI: Not Supported 00:12:18.346 Virtualization Management: Not Supported 00:12:18.346 Doorbell Buffer Config: Supported 00:12:18.346 Get LBA Status Capability: Not Supported 00:12:18.346 Command & Feature Lockdown Capability: Not Supported 00:12:18.346 Abort Command Limit: 4 00:12:18.346 Async Event Request Limit: 4 00:12:18.346 Number of Firmware Slots: N/A 00:12:18.346 Firmware Slot 1 Read-Only: N/A 00:12:18.346 Firmware Activation Without Reset: N/A 00:12:18.346 Multiple Update Detection Support: N/A 00:12:18.346 Firmware Update Granularity: No Information Provided 00:12:18.346 Per-Namespace SMART Log: Yes 00:12:18.346 Asymmetric Namespace Access Log Page: Not Supported 00:12:18.346 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:18.346 Command Effects Log Page: Supported 00:12:18.346 Get Log Page Extended Data: Supported 00:12:18.346 Telemetry Log Pages: Not Supported 00:12:18.346 Persistent Event Log Pages: Not Supported 00:12:18.346 Supported Log Pages Log Page: May Support 00:12:18.346 Commands Supported & Effects Log Page: Not Supported 00:12:18.346 Feature Identifiers & Effects Log Page:May Support 00:12:18.346 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.346 Data Area 4 for Telemetry Log: Not Supported 00:12:18.346 Error Log Page Entries Supported: 1 00:12:18.346 Keep Alive: Not Supported 00:12:18.346 00:12:18.346 NVM Command Set Attributes 00:12:18.346 ========================== 00:12:18.346 Submission Queue Entry Size 00:12:18.346 Max: 64 00:12:18.346 Min: 64 00:12:18.346 Completion Queue Entry Size 00:12:18.346 Max: 16 
00:12:18.346 Min: 16 00:12:18.346 Number of Namespaces: 256 00:12:18.346 Compare Command: Supported 00:12:18.346 Write Uncorrectable Command: Not Supported 00:12:18.346 Dataset Management Command: Supported 00:12:18.346 Write Zeroes Command: Supported 00:12:18.346 Set Features Save Field: Supported 00:12:18.346 Reservations: Not Supported 00:12:18.346 Timestamp: Supported 00:12:18.346 Copy: Supported 00:12:18.346 Volatile Write Cache: Present 00:12:18.346 Atomic Write Unit (Normal): 1 00:12:18.346 Atomic Write Unit (PFail): 1 00:12:18.346 Atomic Compare & Write Unit: 1 00:12:18.346 Fused Compare & Write: Not Supported 00:12:18.346 Scatter-Gather List 00:12:18.346 SGL Command Set: Supported 00:12:18.346 SGL Keyed: Not Supported 00:12:18.346 SGL Bit Bucket Descriptor: Not Supported 00:12:18.346 SGL Metadata Pointer: Not Supported 00:12:18.346 Oversized SGL: Not Supported 00:12:18.346 SGL Metadata Address: Not Supported 00:12:18.346 SGL Offset: Not Supported 00:12:18.346 Transport SGL Data Block: Not Supported 00:12:18.346 Replay Protected Memory Block: Not Supported 00:12:18.346 00:12:18.346 Firmware Slot Information 00:12:18.346 ========================= 00:12:18.346 Active slot: 1 00:12:18.346 Slot 1 Firmware Revision: 1.0 00:12:18.346 00:12:18.346 00:12:18.346 Commands Supported and Effects 00:12:18.346 ============================== 00:12:18.346 Admin Commands 00:12:18.346 -------------- 00:12:18.346 Delete I/O Submission Queue (00h): Supported 00:12:18.346 Create I/O Submission Queue (01h): Supported 00:12:18.346 Get Log Page (02h): Supported 00:12:18.346 Delete I/O Completion Queue (04h): Supported 00:12:18.346 Create I/O Completion Queue (05h): Supported 00:12:18.346 Identify (06h): Supported 00:12:18.346 Abort (08h): Supported 00:12:18.346 Set Features (09h): Supported 00:12:18.346 Get Features (0Ah): Supported 00:12:18.346 Asynchronous Event Request (0Ch): Supported 00:12:18.346 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:18.346 Directive Send (19h): Supported 00:12:18.346 Directive Receive (1Ah): Supported 00:12:18.346 Virtualization Management (1Ch): Supported 00:12:18.346 Doorbell Buffer Config (7Ch): Supported 00:12:18.346 Format NVM (80h): Supported LBA-Change 00:12:18.346 I/O Commands 00:12:18.346 ------------ 00:12:18.346 Flush (00h): Supported LBA-Change 00:12:18.346 Write (01h): Supported LBA-Change 00:12:18.346 Read (02h): Supported 00:12:18.346 Compare (05h): Supported 00:12:18.346 Write Zeroes (08h): Supported LBA-Change 00:12:18.346 Dataset Management (09h): Supported LBA-Change 00:12:18.346 Unknown (0Ch): Supported 00:12:18.346 Unknown (12h): Supported 00:12:18.346 Copy (19h): Supported LBA-Change 00:12:18.346 Unknown (1Dh): Supported LBA-Change 00:12:18.346 00:12:18.346 Error Log 00:12:18.346 ========= 00:12:18.346 00:12:18.346 Arbitration 00:12:18.346 =========== 00:12:18.346 Arbitration Burst: no limit 00:12:18.346 00:12:18.346 Power Management 00:12:18.346 ================ 00:12:18.346 Number of Power States: 1 00:12:18.346 Current Power State: Power State #0 00:12:18.346 Power State #0: 00:12:18.346 Max Power: 25.00 W 00:12:18.346 Non-Operational State: Operational 00:12:18.346 Entry Latency: 16 microseconds 00:12:18.346 Exit Latency: 4 microseconds 00:12:18.346 Relative Read Throughput: 0 00:12:18.346 Relative Read Latency: 0 00:12:18.346 Relative Write Throughput: 0 00:12:18.346 Relative Write Latency: 0 00:12:18.346 Idle Power: Not Reported 00:12:18.346 Active Power: Not Reported 00:12:18.346 Non-Operational Permissive Mode: Not Supported 
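When scanning dumps this verbose, the same identify invocation can be piped through grep to pull out a single attribute; the BDF here matches the controller above and the pattern is purely illustrative:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 | grep -E 'Temperature|Spare'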
00:12:18.346 00:12:18.346 Health Information 00:12:18.346 ================== 00:12:18.346 Critical Warnings: 00:12:18.346 Available Spare Space: OK 00:12:18.346 Temperature: OK 00:12:18.346 Device Reliability: OK 00:12:18.346 Read Only: No 00:12:18.346 Volatile Memory Backup: OK 00:12:18.346 Current Temperature: 323 Kelvin (50 Celsius) 00:12:18.346 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:18.346 Available Spare: 0% 00:12:18.346 Available Spare Threshold: 0% 00:12:18.346 Life Percentage Used: 0% 00:12:18.346 Data Units Read: 2377 00:12:18.346 Data Units Written: 2164 00:12:18.346 Host Read Commands: 107489 00:12:18.346 Host Write Commands: 105758 00:12:18.346 Controller Busy Time: 0 minutes 00:12:18.346 Power Cycles: 0 00:12:18.346 Power On Hours: 0 hours 00:12:18.346 Unsafe Shutdowns: 0 00:12:18.346 Unrecoverable Media Errors: 0 00:12:18.346 Lifetime Error Log Entries: 0 00:12:18.346 Warning Temperature Time: 0 minutes 00:12:18.346 Critical Temperature Time: 0 minutes 00:12:18.346 00:12:18.346 Number of Queues 00:12:18.346 ================ 00:12:18.346 Number of I/O Submission Queues: 64 00:12:18.346 Number of I/O Completion Queues: 64 00:12:18.346 00:12:18.346 ZNS Specific Controller Data 00:12:18.346 ============================ 00:12:18.346 Zone Append Size Limit: 0 00:12:18.346 00:12:18.346 00:12:18.346 Active Namespaces 00:12:18.346 ================= 00:12:18.346 Namespace ID:1 00:12:18.346 Error Recovery Timeout: Unlimited 00:12:18.346 Command Set Identifier: NVM (00h) 00:12:18.346 Deallocate: Supported 00:12:18.346 Deallocated/Unwritten Error: Supported 00:12:18.346 Deallocated Read Value: All 0x00 00:12:18.346 Deallocate in Write Zeroes: Not Supported 00:12:18.346 Deallocated Guard Field: 0xFFFF 00:12:18.346 Flush: Supported 00:12:18.347 Reservation: Not Supported 00:12:18.347 Namespace Sharing Capabilities: Private 00:12:18.347 Size (in LBAs): 1048576 (4GiB) 00:12:18.347 Capacity (in LBAs): 1048576 (4GiB) 00:12:18.347 Utilization (in LBAs): 1048576 (4GiB) 00:12:18.347 Thin Provisioning: Not Supported 00:12:18.347 Per-NS Atomic Units: No 00:12:18.347 Maximum Single Source Range Length: 128 00:12:18.347 Maximum Copy Length: 128 00:12:18.347 Maximum Source Range Count: 128 00:12:18.347 NGUID/EUI64 Never Reused: No 00:12:18.347 Namespace Write Protected: No 00:12:18.347 Number of LBA Formats: 8 00:12:18.347 Current LBA Format: LBA Format #04 00:12:18.347 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.347 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:18.347 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.347 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:18.347 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.347 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.347 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.347 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.347 00:12:18.347 NVM Specific Namespace Data 00:12:18.347 =========================== 00:12:18.347 Logical Block Storage Tag Mask: 0 00:12:18.347 Protection Information Capabilities: 00:12:18.347 16b Guard Protection Information Storage Tag Support: No 00:12:18.347 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.347 Storage Tag Check Read Support: No 00:12:18.347 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Namespace ID:2 00:12:18.347 Error Recovery Timeout: Unlimited 00:12:18.347 Command Set Identifier: NVM (00h) 00:12:18.347 Deallocate: Supported 00:12:18.347 Deallocated/Unwritten Error: Supported 00:12:18.347 Deallocated Read Value: All 0x00 00:12:18.347 Deallocate in Write Zeroes: Not Supported 00:12:18.347 Deallocated Guard Field: 0xFFFF 00:12:18.347 Flush: Supported 00:12:18.347 Reservation: Not Supported 00:12:18.347 Namespace Sharing Capabilities: Private 00:12:18.347 Size (in LBAs): 1048576 (4GiB) 00:12:18.347 Capacity (in LBAs): 1048576 (4GiB) 00:12:18.347 Utilization (in LBAs): 1048576 (4GiB) 00:12:18.347 Thin Provisioning: Not Supported 00:12:18.347 Per-NS Atomic Units: No 00:12:18.347 Maximum Single Source Range Length: 128 00:12:18.347 Maximum Copy Length: 128 00:12:18.347 Maximum Source Range Count: 128 00:12:18.347 NGUID/EUI64 Never Reused: No 00:12:18.347 Namespace Write Protected: No 00:12:18.347 Number of LBA Formats: 8 00:12:18.347 Current LBA Format: LBA Format #04 00:12:18.347 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.347 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:18.347 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.347 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:18.347 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.347 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.347 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.347 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.347 00:12:18.347 NVM Specific Namespace Data 00:12:18.347 =========================== 00:12:18.347 Logical Block Storage Tag Mask: 0 00:12:18.347 Protection Information Capabilities: 00:12:18.347 16b Guard Protection Information Storage Tag Support: No 00:12:18.347 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.347 Storage Tag Check Read Support: No 00:12:18.347 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Namespace ID:3 00:12:18.347 Error Recovery Timeout: Unlimited 00:12:18.347 Command Set Identifier: NVM (00h) 00:12:18.347 Deallocate: Supported 00:12:18.347 Deallocated/Unwritten Error: Supported 00:12:18.347 Deallocated Read 
Value: All 0x00 00:12:18.347 Deallocate in Write Zeroes: Not Supported 00:12:18.347 Deallocated Guard Field: 0xFFFF 00:12:18.347 Flush: Supported 00:12:18.347 Reservation: Not Supported 00:12:18.347 Namespace Sharing Capabilities: Private 00:12:18.347 Size (in LBAs): 1048576 (4GiB) 00:12:18.347 Capacity (in LBAs): 1048576 (4GiB) 00:12:18.347 Utilization (in LBAs): 1048576 (4GiB) 00:12:18.347 Thin Provisioning: Not Supported 00:12:18.347 Per-NS Atomic Units: No 00:12:18.347 Maximum Single Source Range Length: 128 00:12:18.347 Maximum Copy Length: 128 00:12:18.347 Maximum Source Range Count: 128 00:12:18.347 NGUID/EUI64 Never Reused: No 00:12:18.347 Namespace Write Protected: No 00:12:18.347 Number of LBA Formats: 8 00:12:18.347 Current LBA Format: LBA Format #04 00:12:18.347 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.347 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:18.347 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.347 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:18.347 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.347 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.347 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.347 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.347 00:12:18.347 NVM Specific Namespace Data 00:12:18.347 =========================== 00:12:18.347 Logical Block Storage Tag Mask: 0 00:12:18.347 Protection Information Capabilities: 00:12:18.347 16b Guard Protection Information Storage Tag Support: No 00:12:18.347 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.347 Storage Tag Check Read Support: No 00:12:18.347 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.347 20:30:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:18.347 20:30:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:18.607 ===================================================== 00:12:18.607 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:18.607 ===================================================== 00:12:18.607 Controller Capabilities/Features 00:12:18.607 ================================ 00:12:18.607 Vendor ID: 1b36 00:12:18.607 Subsystem Vendor ID: 1af4 00:12:18.607 Serial Number: 12343 00:12:18.607 Model Number: QEMU NVMe Ctrl 00:12:18.607 Firmware Version: 8.0.0 00:12:18.607 Recommended Arb Burst: 6 00:12:18.607 IEEE OUI Identifier: 00 54 52 00:12:18.607 Multi-path I/O 00:12:18.607 May have multiple subsystem ports: No 00:12:18.607 May have multiple controllers: Yes 00:12:18.607 Associated with SR-IOV VF: No 00:12:18.607 Max Data Transfer Size: 524288 00:12:18.607 Max Number of Namespaces: 
256 00:12:18.607 Max Number of I/O Queues: 64 00:12:18.607 NVMe Specification Version (VS): 1.4 00:12:18.607 NVMe Specification Version (Identify): 1.4 00:12:18.607 Maximum Queue Entries: 2048 00:12:18.607 Contiguous Queues Required: Yes 00:12:18.607 Arbitration Mechanisms Supported 00:12:18.607 Weighted Round Robin: Not Supported 00:12:18.607 Vendor Specific: Not Supported 00:12:18.607 Reset Timeout: 7500 ms 00:12:18.607 Doorbell Stride: 4 bytes 00:12:18.607 NVM Subsystem Reset: Not Supported 00:12:18.607 Command Sets Supported 00:12:18.607 NVM Command Set: Supported 00:12:18.607 Boot Partition: Not Supported 00:12:18.607 Memory Page Size Minimum: 4096 bytes 00:12:18.607 Memory Page Size Maximum: 65536 bytes 00:12:18.607 Persistent Memory Region: Not Supported 00:12:18.607 Optional Asynchronous Events Supported 00:12:18.607 Namespace Attribute Notices: Supported 00:12:18.607 Firmware Activation Notices: Not Supported 00:12:18.607 ANA Change Notices: Not Supported 00:12:18.607 PLE Aggregate Log Change Notices: Not Supported 00:12:18.607 LBA Status Info Alert Notices: Not Supported 00:12:18.607 EGE Aggregate Log Change Notices: Not Supported 00:12:18.607 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.607 Zone Descriptor Change Notices: Not Supported 00:12:18.607 Discovery Log Change Notices: Not Supported 00:12:18.607 Controller Attributes 00:12:18.607 128-bit Host Identifier: Not Supported 00:12:18.607 Non-Operational Permissive Mode: Not Supported 00:12:18.607 NVM Sets: Not Supported 00:12:18.608 Read Recovery Levels: Not Supported 00:12:18.608 Endurance Groups: Supported 00:12:18.608 Predictable Latency Mode: Not Supported 00:12:18.608 Traffic Based Keep Alive: Not Supported 00:12:18.608 Namespace Granularity: Not Supported 00:12:18.608 SQ Associations: Not Supported 00:12:18.608 UUID List: Not Supported 00:12:18.608 Multi-Domain Subsystem: Not Supported 00:12:18.608 Fixed Capacity Management: Not Supported 00:12:18.608 Variable Capacity Management: Not Supported 00:12:18.608 Delete Endurance Group: Not Supported 00:12:18.608 Delete NVM Set: Not Supported 00:12:18.608 Extended LBA Formats Supported: Supported 00:12:18.608 Flexible Data Placement Supported: Supported 00:12:18.608 00:12:18.608 Controller Memory Buffer Support 00:12:18.608 ================================ 00:12:18.608 Supported: No 00:12:18.608 00:12:18.608 Persistent Memory Region Support 00:12:18.608 ================================ 00:12:18.608 Supported: No 00:12:18.608 00:12:18.608 Admin Command Set Attributes 00:12:18.608 ============================ 00:12:18.608 Security Send/Receive: Not Supported 00:12:18.608 Format NVM: Supported 00:12:18.608 Firmware Activate/Download: Not Supported 00:12:18.608 Namespace Management: Supported 00:12:18.608 Device Self-Test: Not Supported 00:12:18.608 Directives: Supported 00:12:18.608 NVMe-MI: Not Supported 00:12:18.608 Virtualization Management: Not Supported 00:12:18.608 Doorbell Buffer Config: Supported 00:12:18.608 Get LBA Status Capability: Not Supported 00:12:18.608 Command & Feature Lockdown Capability: Not Supported 00:12:18.608 Abort Command Limit: 4 00:12:18.608 Async Event Request Limit: 4 00:12:18.608 Number of Firmware Slots: N/A 00:12:18.608 Firmware Slot 1 Read-Only: N/A 00:12:18.608 Firmware Activation Without Reset: N/A 00:12:18.608 Multiple Update Detection Support: N/A 00:12:18.608 Firmware Update Granularity: No Information Provided 00:12:18.608 Per-Namespace SMART Log: Yes 00:12:18.608 Asymmetric Namespace Access Log Page: Not Supported
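The controller dump in progress here was produced by the spdk_nvme_identify invocation recorded just above for the device at BDF 0000:00:13.0. A minimal sketch of an equivalent standalone run, assuming the same SPDK build tree seen in this log and that the NVMe devices have already been rebound to a userspace driver by SPDK's stock setup.sh helper; the flag glosses are best-effort readings of the tool's usage text, not taken from this log:

  # Rebind NVMe devices to vfio-pci/uio so SPDK's userspace driver can claim them
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # -r: transport ID string selecting the local PCIe controller at this BDF
  # -i: shared-memory group ID, matching the value the test harness used
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:13.0' -i 0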
00:12:18.608 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:18.608 Command Effects Log Page: Supported 00:12:18.608 Get Log Page Extended Data: Supported 00:12:18.608 Telemetry Log Pages: Not Supported 00:12:18.608 Persistent Event Log Pages: Not Supported 00:12:18.608 Supported Log Pages Log Page: May Support 00:12:18.608 Commands Supported & Effects Log Page: Not Supported 00:12:18.608 Feature Identifiers & Effects Log Page: May Support 00:12:18.608 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.608 Data Area 4 for Telemetry Log: Not Supported 00:12:18.608 Error Log Page Entries Supported: 1 00:12:18.608 Keep Alive: Not Supported 00:12:18.608 00:12:18.608 NVM Command Set Attributes 00:12:18.608 ========================== 00:12:18.608 Submission Queue Entry Size 00:12:18.608 Max: 64 00:12:18.608 Min: 64 00:12:18.608 Completion Queue Entry Size 00:12:18.608 Max: 16 00:12:18.608 Min: 16 00:12:18.608 Number of Namespaces: 256 00:12:18.608 Compare Command: Supported 00:12:18.608 Write Uncorrectable Command: Not Supported 00:12:18.608 Dataset Management Command: Supported 00:12:18.608 Write Zeroes Command: Supported 00:12:18.608 Set Features Save Field: Supported 00:12:18.608 Reservations: Not Supported 00:12:18.608 Timestamp: Supported 00:12:18.608 Copy: Supported 00:12:18.608 Volatile Write Cache: Present 00:12:18.608 Atomic Write Unit (Normal): 1 00:12:18.608 Atomic Write Unit (PFail): 1 00:12:18.608 Atomic Compare & Write Unit: 1 00:12:18.608 Fused Compare & Write: Not Supported 00:12:18.608 Scatter-Gather List 00:12:18.608 SGL Command Set: Supported 00:12:18.608 SGL Keyed: Not Supported 00:12:18.608 SGL Bit Bucket Descriptor: Not Supported 00:12:18.608 SGL Metadata Pointer: Not Supported 00:12:18.608 Oversized SGL: Not Supported 00:12:18.608 SGL Metadata Address: Not Supported 00:12:18.608 SGL Offset: Not Supported 00:12:18.608 Transport SGL Data Block: Not Supported 00:12:18.608 Replay Protected Memory Block: Not Supported 00:12:18.608 00:12:18.608 Firmware Slot Information 00:12:18.608 ========================= 00:12:18.608 Active slot: 1 00:12:18.608 Slot 1 Firmware Revision: 1.0 00:12:18.608 00:12:18.608 00:12:18.608 Commands Supported and Effects 00:12:18.608 ============================== 00:12:18.608 Admin Commands 00:12:18.608 -------------- 00:12:18.608 Delete I/O Submission Queue (00h): Supported 00:12:18.608 Create I/O Submission Queue (01h): Supported 00:12:18.608 Get Log Page (02h): Supported 00:12:18.608 Delete I/O Completion Queue (04h): Supported 00:12:18.608 Create I/O Completion Queue (05h): Supported 00:12:18.608 Identify (06h): Supported 00:12:18.608 Abort (08h): Supported 00:12:18.608 Set Features (09h): Supported 00:12:18.608 Get Features (0Ah): Supported 00:12:18.608 Asynchronous Event Request (0Ch): Supported 00:12:18.608 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:18.608 Directive Send (19h): Supported 00:12:18.608 Directive Receive (1Ah): Supported 00:12:18.608 Virtualization Management (1Ch): Supported 00:12:18.608 Doorbell Buffer Config (7Ch): Supported 00:12:18.608 Format NVM (80h): Supported LBA-Change 00:12:18.608 I/O Commands 00:12:18.608 ------------ 00:12:18.608 Flush (00h): Supported LBA-Change 00:12:18.608 Write (01h): Supported LBA-Change 00:12:18.608 Read (02h): Supported 00:12:18.608 Compare (05h): Supported 00:12:18.608 Write Zeroes (08h): Supported LBA-Change 00:12:18.608 Dataset Management (09h): Supported LBA-Change 00:12:18.608 Unknown (0Ch): Supported 00:12:18.608 Unknown (12h): Supported 00:12:18.608 Copy
(19h): Supported LBA-Change 00:12:18.608 Unknown (1Dh): Supported LBA-Change 00:12:18.608 00:12:18.608 Error Log 00:12:18.608 ========= 00:12:18.608 00:12:18.608 Arbitration 00:12:18.608 =========== 00:12:18.608 Arbitration Burst: no limit 00:12:18.608 00:12:18.608 Power Management 00:12:18.608 ================ 00:12:18.608 Number of Power States: 1 00:12:18.608 Current Power State: Power State #0 00:12:18.608 Power State #0: 00:12:18.608 Max Power: 25.00 W 00:12:18.608 Non-Operational State: Operational 00:12:18.608 Entry Latency: 16 microseconds 00:12:18.608 Exit Latency: 4 microseconds 00:12:18.608 Relative Read Throughput: 0 00:12:18.608 Relative Read Latency: 0 00:12:18.608 Relative Write Throughput: 0 00:12:18.608 Relative Write Latency: 0 00:12:18.608 Idle Power: Not Reported 00:12:18.608 Active Power: Not Reported 00:12:18.608 Non-Operational Permissive Mode: Not Supported 00:12:18.608 00:12:18.608 Health Information 00:12:18.608 ================== 00:12:18.608 Critical Warnings: 00:12:18.608 Available Spare Space: OK 00:12:18.608 Temperature: OK 00:12:18.608 Device Reliability: OK 00:12:18.608 Read Only: No 00:12:18.608 Volatile Memory Backup: OK 00:12:18.608 Current Temperature: 323 Kelvin (50 Celsius) 00:12:18.608 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:18.608 Available Spare: 0% 00:12:18.608 Available Spare Threshold: 0% 00:12:18.608 Life Percentage Used: 0% 00:12:18.608 Data Units Read: 876 00:12:18.608 Data Units Written: 805 00:12:18.608 Host Read Commands: 36681 00:12:18.608 Host Write Commands: 36104 00:12:18.608 Controller Busy Time: 0 minutes 00:12:18.608 Power Cycles: 0 00:12:18.608 Power On Hours: 0 hours 00:12:18.608 Unsafe Shutdowns: 0 00:12:18.608 Unrecoverable Media Errors: 0 00:12:18.608 Lifetime Error Log Entries: 0 00:12:18.608 Warning Temperature Time: 0 minutes 00:12:18.608 Critical Temperature Time: 0 minutes 00:12:18.608 00:12:18.608 Number of Queues 00:12:18.608 ================ 00:12:18.608 Number of I/O Submission Queues: 64 00:12:18.608 Number of I/O Completion Queues: 64 00:12:18.608 00:12:18.608 ZNS Specific Controller Data 00:12:18.608 ============================ 00:12:18.608 Zone Append Size Limit: 0 00:12:18.608 00:12:18.608 00:12:18.608 Active Namespaces 00:12:18.608 ================= 00:12:18.608 Namespace ID:1 00:12:18.608 Error Recovery Timeout: Unlimited 00:12:18.608 Command Set Identifier: NVM (00h) 00:12:18.608 Deallocate: Supported 00:12:18.608 Deallocated/Unwritten Error: Supported 00:12:18.608 Deallocated Read Value: All 0x00 00:12:18.608 Deallocate in Write Zeroes: Not Supported 00:12:18.608 Deallocated Guard Field: 0xFFFF 00:12:18.608 Flush: Supported 00:12:18.608 Reservation: Not Supported 00:12:18.608 Namespace Sharing Capabilities: Multiple Controllers 00:12:18.608 Size (in LBAs): 262144 (1GiB) 00:12:18.608 Capacity (in LBAs): 262144 (1GiB) 00:12:18.608 Utilization (in LBAs): 262144 (1GiB) 00:12:18.608 Thin Provisioning: Not Supported 00:12:18.608 Per-NS Atomic Units: No 00:12:18.609 Maximum Single Source Range Length: 128 00:12:18.609 Maximum Copy Length: 128 00:12:18.609 Maximum Source Range Count: 128 00:12:18.609 NGUID/EUI64 Never Reused: No 00:12:18.609 Namespace Write Protected: No 00:12:18.609 Endurance group ID: 1 00:12:18.609 Number of LBA Formats: 8 00:12:18.609 Current LBA Format: LBA Format #04 00:12:18.609 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.609 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:18.609 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.609 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:12:18.609 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.609 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.609 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.609 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.609 00:12:18.609 Get Feature FDP: 00:12:18.609 ================ 00:12:18.609 Enabled: Yes 00:12:18.609 FDP configuration index: 0 00:12:18.609 00:12:18.609 FDP configurations log page 00:12:18.609 =========================== 00:12:18.609 Number of FDP configurations: 1 00:12:18.609 Version: 0 00:12:18.609 Size: 112 00:12:18.609 FDP Configuration Descriptor: 0 00:12:18.609 Descriptor Size: 96 00:12:18.609 Reclaim Group Identifier format: 2 00:12:18.609 FDP Volatile Write Cache: Not Present 00:12:18.609 FDP Configuration: Valid 00:12:18.609 Vendor Specific Size: 0 00:12:18.609 Number of Reclaim Groups: 2 00:12:18.609 Number of Reclaim Unit Handles: 8 00:12:18.609 Max Placement Identifiers: 128 00:12:18.609 Number of Namespaces Supported: 256 00:12:18.609 Reclaim Unit Nominal Size: 6000000 bytes 00:12:18.609 Estimated Reclaim Unit Time Limit: Not Reported 00:12:18.609 RUH Desc #000: RUH Type: Initially Isolated 00:12:18.609 RUH Desc #001: RUH Type: Initially Isolated 00:12:18.609 RUH Desc #002: RUH Type: Initially Isolated 00:12:18.609 RUH Desc #003: RUH Type: Initially Isolated 00:12:18.609 RUH Desc #004: RUH Type: Initially Isolated 00:12:18.609 RUH Desc #005: RUH Type: Initially Isolated 00:12:18.609 RUH Desc #006: RUH Type: Initially Isolated 00:12:18.609 RUH Desc #007: RUH Type: Initially Isolated 00:12:18.609 00:12:18.609 FDP reclaim unit handle usage log page 00:12:18.868 ====================================== 00:12:18.868 Number of Reclaim Unit Handles: 8 00:12:18.868 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:18.868 RUH Usage Desc #001: RUH Attributes: Unused 00:12:18.868 RUH Usage Desc #002: RUH Attributes: Unused 00:12:18.868 RUH Usage Desc #003: RUH Attributes: Unused 00:12:18.868 RUH Usage Desc #004: RUH Attributes: Unused 00:12:18.868 RUH Usage Desc #005: RUH Attributes: Unused 00:12:18.868 RUH Usage Desc #006: RUH Attributes: Unused 00:12:18.868 RUH Usage Desc #007: RUH Attributes: Unused 00:12:18.868 00:12:18.868 FDP statistics log page 00:12:18.868 ======================= 00:12:18.868 Host bytes with metadata written: 524394496 00:12:18.868 Media bytes with metadata written: 524451840 00:12:18.868 Media bytes erased: 0 00:12:18.868 00:12:18.868 FDP events log page 00:12:18.868 =================== 00:12:18.868 Number of FDP events: 0 00:12:18.868 00:12:18.868 NVM Specific Namespace Data 00:12:18.868 =========================== 00:12:18.868 Logical Block Storage Tag Mask: 0 00:12:18.868 Protection Information Capabilities: 00:12:18.868 16b Guard Protection Information Storage Tag Support: No 00:12:18.868 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.868 Storage Tag Check Read Support: No 00:12:18.868 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.868 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.868 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.868 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.868 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.868 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.868 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.868 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.868 00:12:18.868 real 0m1.746s 00:12:18.868 user 0m0.638s 00:12:18.868 sys 0m0.904s 00:12:18.868 20:30:26 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.868 ************************************ 00:12:18.868 END TEST nvme_identify 00:12:18.868 ************************************ 00:12:18.868 20:30:26 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:18.868 20:30:26 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:18.868 20:30:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.868 20:30:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.868 20:30:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.868 ************************************ 00:12:18.868 START TEST nvme_perf 00:12:18.868 ************************************ 00:12:18.868 20:30:26 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:12:18.868 20:30:26 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:20.249 Initializing NVMe Controllers 00:12:20.249 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:20.249 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:20.249 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:20.249 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:20.249 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:20.249 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:20.249 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:20.249 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:20.249 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:20.249 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:20.249 Initialization complete. Launching workers. 
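The spdk_nvme_perf invocation recorded above drives the latency results that follow. A sketch of an equivalent manual run with the flags glossed; this assumes the same build tree seen in this log, and the glosses are best-effort readings of the tool's general usage text rather than anything stated here:

  # Flag glosses (best effort):
  #   -q 128    queue depth: 128 outstanding I/Os per namespace
  #   -w read   workload pattern: 100% reads
  #   -o 12288  I/O size in bytes (12 KiB, i.e. three 4096-byte blocks of LBA Format #04)
  #   -t 1      run time in seconds
  #   -LL       latency tracking; giving -L twice requests the detailed histograms below
  #   -i 0      shared-memory group ID
  #   -N        kept exactly as recorded in the invocation above (see the tool's --help)
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w read -o 12288 -t 1 -LL -i 0 -N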
00:12:20.249 ======================================================== 00:12:20.249 Latency(us) 00:12:20.249 Device Information : IOPS MiB/s Average min max 00:12:20.249 PCIE (0000:00:10.0) NSID 1 from core 0: 14129.22 165.58 9079.80 7012.76 51025.00 00:12:20.249 PCIE (0000:00:11.0) NSID 1 from core 0: 14129.22 165.58 9063.87 7131.16 48691.20 00:12:20.249 PCIE (0000:00:13.0) NSID 1 from core 0: 14129.22 165.58 9045.59 4001.64 46909.01 00:12:20.249 PCIE (0000:00:12.0) NSID 1 from core 0: 14129.22 165.58 9028.00 7125.85 44567.93 00:12:20.249 PCIE (0000:00:12.0) NSID 2 from core 0: 14129.22 165.58 9009.18 7157.13 42310.40 00:12:20.249 PCIE (0000:00:12.0) NSID 3 from core 0: 14193.15 166.33 8950.15 7246.27 34874.68 00:12:20.249 ======================================================== 00:12:20.249 Total : 84839.26 994.21 9029.37 4001.64 51025.00 00:12:20.249 00:12:20.249 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:20.249 ================================================================================= 00:12:20.249 1.00000% : 7474.789us 00:12:20.249 10.00000% : 8106.461us 00:12:20.249 25.00000% : 8369.658us 00:12:20.249 50.00000% : 8685.494us 00:12:20.249 75.00000% : 9001.330us 00:12:20.249 90.00000% : 9317.166us 00:12:20.249 95.00000% : 9633.002us 00:12:20.249 98.00000% : 12107.052us 00:12:20.249 99.00000% : 16002.365us 00:12:20.249 99.50000% : 44427.618us 00:12:20.249 99.90000% : 50744.341us 00:12:20.249 99.99000% : 51165.455us 00:12:20.249 99.99900% : 51165.455us 00:12:20.249 99.99990% : 51165.455us 00:12:20.249 99.99999% : 51165.455us 00:12:20.249 00:12:20.249 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:20.249 ================================================================================= 00:12:20.249 1.00000% : 7527.428us 00:12:20.249 10.00000% : 8159.100us 00:12:20.249 25.00000% : 8422.297us 00:12:20.249 50.00000% : 8685.494us 00:12:20.249 75.00000% : 9001.330us 00:12:20.249 90.00000% : 9264.527us 00:12:20.249 95.00000% : 9580.363us 00:12:20.249 98.00000% : 12633.446us 00:12:20.249 99.00000% : 16212.922us 00:12:20.249 99.50000% : 42111.486us 00:12:20.249 99.90000% : 48428.209us 00:12:20.249 99.99000% : 48849.324us 00:12:20.249 99.99900% : 48849.324us 00:12:20.249 99.99990% : 48849.324us 00:12:20.249 99.99999% : 48849.324us 00:12:20.249 00:12:20.249 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:20.249 ================================================================================= 00:12:20.249 1.00000% : 7685.346us 00:12:20.249 10.00000% : 8159.100us 00:12:20.249 25.00000% : 8422.297us 00:12:20.249 50.00000% : 8685.494us 00:12:20.249 75.00000% : 9001.330us 00:12:20.249 90.00000% : 9264.527us 00:12:20.249 95.00000% : 9527.724us 00:12:20.249 98.00000% : 12686.085us 00:12:20.249 99.00000% : 16528.758us 00:12:20.249 99.50000% : 40216.469us 00:12:20.249 99.90000% : 46533.192us 00:12:20.249 99.99000% : 46954.307us 00:12:20.249 99.99900% : 46954.307us 00:12:20.249 99.99990% : 46954.307us 00:12:20.249 99.99999% : 46954.307us 00:12:20.249 00:12:20.249 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:20.249 ================================================================================= 00:12:20.249 1.00000% : 7580.067us 00:12:20.249 10.00000% : 8159.100us 00:12:20.249 25.00000% : 8422.297us 00:12:20.249 50.00000% : 8685.494us 00:12:20.249 75.00000% : 9001.330us 00:12:20.249 90.00000% : 9264.527us 00:12:20.249 95.00000% : 9527.724us 00:12:20.249 98.00000% : 12844.003us 00:12:20.249 99.00000% : 
16423.480us 00:12:20.249 99.50000% : 38110.895us 00:12:20.249 99.90000% : 44217.060us 00:12:20.249 99.99000% : 44638.175us 00:12:20.249 99.99900% : 44638.175us 00:12:20.249 99.99990% : 44638.175us 00:12:20.249 99.99999% : 44638.175us 00:12:20.249 00:12:20.249 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:20.249 ================================================================================= 00:12:20.249 1.00000% : 7580.067us 00:12:20.249 10.00000% : 8159.100us 00:12:20.249 25.00000% : 8369.658us 00:12:20.249 50.00000% : 8685.494us 00:12:20.249 75.00000% : 9001.330us 00:12:20.249 90.00000% : 9264.527us 00:12:20.249 95.00000% : 9580.363us 00:12:20.249 98.00000% : 13001.921us 00:12:20.249 99.00000% : 16002.365us 00:12:20.249 99.50000% : 35584.206us 00:12:20.249 99.90000% : 41900.929us 00:12:20.249 99.99000% : 42322.043us 00:12:20.249 99.99900% : 42322.043us 00:12:20.249 99.99990% : 42322.043us 00:12:20.249 99.99999% : 42322.043us 00:12:20.249 00:12:20.249 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:20.249 ================================================================================= 00:12:20.249 1.00000% : 7580.067us 00:12:20.249 10.00000% : 8159.100us 00:12:20.249 25.00000% : 8369.658us 00:12:20.249 50.00000% : 8685.494us 00:12:20.250 75.00000% : 9001.330us 00:12:20.250 90.00000% : 9264.527us 00:12:20.250 95.00000% : 9633.002us 00:12:20.250 98.00000% : 12949.282us 00:12:20.250 99.00000% : 15581.250us 00:12:20.250 99.50000% : 28425.253us 00:12:20.250 99.90000% : 34531.418us 00:12:20.250 99.99000% : 34952.533us 00:12:20.250 99.99900% : 34952.533us 00:12:20.250 99.99990% : 34952.533us 00:12:20.250 99.99999% : 34952.533us 00:12:20.250 00:12:20.250 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:20.250 ============================================================================== 00:12:20.250 Range in us Cumulative IO count 00:12:20.250 7001.035 - 7053.674: 0.0212% ( 3) 00:12:20.250 7053.674 - 7106.313: 0.0495% ( 4) 00:12:20.250 7106.313 - 7158.953: 0.1061% ( 8) 00:12:20.250 7158.953 - 7211.592: 0.1626% ( 8) 00:12:20.250 7211.592 - 7264.231: 0.2404% ( 11) 00:12:20.250 7264.231 - 7316.871: 0.3676% ( 18) 00:12:20.250 7316.871 - 7369.510: 0.5303% ( 23) 00:12:20.250 7369.510 - 7422.149: 0.8131% ( 40) 00:12:20.250 7422.149 - 7474.789: 1.1171% ( 43) 00:12:20.250 7474.789 - 7527.428: 1.4140% ( 42) 00:12:20.250 7527.428 - 7580.067: 1.8312% ( 59) 00:12:20.250 7580.067 - 7632.707: 2.1847% ( 50) 00:12:20.250 7632.707 - 7685.346: 2.6230% ( 62) 00:12:20.250 7685.346 - 7737.986: 3.1391% ( 73) 00:12:20.250 7737.986 - 7790.625: 3.6482% ( 72) 00:12:20.250 7790.625 - 7843.264: 4.3199% ( 95) 00:12:20.250 7843.264 - 7895.904: 5.0269% ( 100) 00:12:20.250 7895.904 - 7948.543: 5.9177% ( 126) 00:12:20.250 7948.543 - 8001.182: 7.0277% ( 157) 00:12:20.250 8001.182 - 8053.822: 8.5690% ( 218) 00:12:20.250 8053.822 - 8106.461: 10.7749% ( 312) 00:12:20.250 8106.461 - 8159.100: 13.2424% ( 349) 00:12:20.250 8159.100 - 8211.740: 16.1340% ( 409) 00:12:20.250 8211.740 - 8264.379: 19.5419% ( 482) 00:12:20.250 8264.379 - 8317.018: 23.2537% ( 525) 00:12:20.250 8317.018 - 8369.658: 27.1988% ( 558) 00:12:20.250 8369.658 - 8422.297: 31.3773% ( 591) 00:12:20.250 8422.297 - 8474.937: 35.5204% ( 586) 00:12:20.250 8474.937 - 8527.576: 39.5433% ( 569) 00:12:20.250 8527.576 - 8580.215: 43.4601% ( 554) 00:12:20.250 8580.215 - 8632.855: 47.4618% ( 566) 00:12:20.250 8632.855 - 8685.494: 51.4777% ( 568) 00:12:20.250 8685.494 - 8738.133: 55.4723% ( 565) 00:12:20.250 
8738.133 - 8790.773: 59.5023% ( 570) 00:12:20.250 8790.773 - 8843.412: 63.4191% ( 554) 00:12:20.250 8843.412 - 8896.051: 67.3148% ( 551) 00:12:20.250 8896.051 - 8948.691: 71.2458% ( 556) 00:12:20.250 8948.691 - 9001.330: 75.1273% ( 549) 00:12:20.250 9001.330 - 9053.969: 78.9098% ( 535) 00:12:20.250 9053.969 - 9106.609: 82.3247% ( 483) 00:12:20.250 9106.609 - 9159.248: 85.3436% ( 427) 00:12:20.250 9159.248 - 9211.888: 87.8182% ( 350) 00:12:20.250 9211.888 - 9264.527: 89.6493% ( 259) 00:12:20.250 9264.527 - 9317.166: 91.1128% ( 207) 00:12:20.250 9317.166 - 9369.806: 92.2794% ( 165) 00:12:20.250 9369.806 - 9422.445: 93.1420% ( 122) 00:12:20.250 9422.445 - 9475.084: 93.8419% ( 99) 00:12:20.250 9475.084 - 9527.724: 94.4287% ( 83) 00:12:20.250 9527.724 - 9580.363: 94.8317% ( 57) 00:12:20.250 9580.363 - 9633.002: 95.2347% ( 57) 00:12:20.250 9633.002 - 9685.642: 95.5458% ( 44) 00:12:20.250 9685.642 - 9738.281: 95.8145% ( 38) 00:12:20.250 9738.281 - 9790.920: 95.9912% ( 25) 00:12:20.250 9790.920 - 9843.560: 96.1680% ( 25) 00:12:20.250 9843.560 - 9896.199: 96.3023% ( 19) 00:12:20.250 9896.199 - 9948.839: 96.3660% ( 9) 00:12:20.250 9948.839 - 10001.478: 96.4154% ( 7) 00:12:20.250 10001.478 - 10054.117: 96.4508% ( 5) 00:12:20.250 10054.117 - 10106.757: 96.4720% ( 3) 00:12:20.250 10106.757 - 10159.396: 96.5215% ( 7) 00:12:20.250 10159.396 - 10212.035: 96.5568% ( 5) 00:12:20.250 10212.035 - 10264.675: 96.5922% ( 5) 00:12:20.250 10264.675 - 10317.314: 96.6134% ( 3) 00:12:20.250 10317.314 - 10369.953: 96.6417% ( 4) 00:12:20.250 10369.953 - 10422.593: 96.6912% ( 7) 00:12:20.250 10422.593 - 10475.232: 96.7831% ( 13) 00:12:20.250 10475.232 - 10527.871: 96.8396% ( 8) 00:12:20.250 10527.871 - 10580.511: 96.8962% ( 8) 00:12:20.250 10580.511 - 10633.150: 96.9598% ( 9) 00:12:20.250 10633.150 - 10685.790: 97.0164% ( 8) 00:12:20.250 10685.790 - 10738.429: 97.0588% ( 6) 00:12:20.250 10738.429 - 10791.068: 97.1083% ( 7) 00:12:20.250 10791.068 - 10843.708: 97.1437% ( 5) 00:12:20.250 10843.708 - 10896.347: 97.1790% ( 5) 00:12:20.250 10896.347 - 10948.986: 97.2285% ( 7) 00:12:20.250 10948.986 - 11001.626: 97.2639% ( 5) 00:12:20.250 11001.626 - 11054.265: 97.2992% ( 5) 00:12:20.250 11054.265 - 11106.904: 97.3346% ( 5) 00:12:20.250 11106.904 - 11159.544: 97.3840% ( 7) 00:12:20.250 11159.544 - 11212.183: 97.4123% ( 4) 00:12:20.250 11212.183 - 11264.822: 97.4618% ( 7) 00:12:20.250 11264.822 - 11317.462: 97.4830% ( 3) 00:12:20.250 11317.462 - 11370.101: 97.5255% ( 6) 00:12:20.250 11370.101 - 11422.741: 97.5679% ( 6) 00:12:20.250 11422.741 - 11475.380: 97.5962% ( 4) 00:12:20.250 11475.380 - 11528.019: 97.6456% ( 7) 00:12:20.250 11528.019 - 11580.659: 97.6669% ( 3) 00:12:20.250 11580.659 - 11633.298: 97.7163% ( 7) 00:12:20.250 11633.298 - 11685.937: 97.7376% ( 3) 00:12:20.250 11685.937 - 11738.577: 97.7870% ( 7) 00:12:20.250 11738.577 - 11791.216: 97.8295% ( 6) 00:12:20.250 11791.216 - 11843.855: 97.8577% ( 4) 00:12:20.250 11843.855 - 11896.495: 97.8790% ( 3) 00:12:20.250 11896.495 - 11949.134: 97.9214% ( 6) 00:12:20.250 11949.134 - 12001.773: 97.9638% ( 6) 00:12:20.250 12001.773 - 12054.413: 97.9992% ( 5) 00:12:20.250 12054.413 - 12107.052: 98.0416% ( 6) 00:12:20.250 12107.052 - 12159.692: 98.0699% ( 4) 00:12:20.250 12159.692 - 12212.331: 98.1123% ( 6) 00:12:20.250 12212.331 - 12264.970: 98.1335% ( 3) 00:12:20.250 12264.970 - 12317.610: 98.1547% ( 3) 00:12:20.250 12317.610 - 12370.249: 98.1759% ( 3) 00:12:20.250 12370.249 - 12422.888: 98.1830% ( 1) 00:12:20.250 12422.888 - 12475.528: 98.1900% ( 1) 00:12:20.250 13107.200 - 
13159.839: 98.2113% ( 3) 00:12:20.250 13212.479 - 13265.118: 98.2325% ( 3) 00:12:20.250 13265.118 - 13317.757: 98.2395% ( 1) 00:12:20.250 13317.757 - 13370.397: 98.2466% ( 1) 00:12:20.250 13370.397 - 13423.036: 98.2537% ( 1) 00:12:20.250 13423.036 - 13475.676: 98.2678% ( 2) 00:12:20.250 13475.676 - 13580.954: 98.2820% ( 2) 00:12:20.250 13580.954 - 13686.233: 98.3032% ( 3) 00:12:20.250 13686.233 - 13791.512: 98.3173% ( 2) 00:12:20.250 13791.512 - 13896.790: 98.3385% ( 3) 00:12:20.250 13896.790 - 14002.069: 98.3527% ( 2) 00:12:20.250 14002.069 - 14107.348: 98.3739% ( 3) 00:12:20.250 14107.348 - 14212.627: 98.4021% ( 4) 00:12:20.250 14212.627 - 14317.905: 98.4163% ( 2) 00:12:20.250 14317.905 - 14423.184: 98.4375% ( 3) 00:12:20.250 14423.184 - 14528.463: 98.4658% ( 4) 00:12:20.250 14528.463 - 14633.741: 98.5223% ( 8) 00:12:20.250 14633.741 - 14739.020: 98.5577% ( 5) 00:12:20.250 14739.020 - 14844.299: 98.6143% ( 8) 00:12:20.250 14844.299 - 14949.578: 98.6496% ( 5) 00:12:20.250 14949.578 - 15054.856: 98.6850% ( 5) 00:12:20.250 15054.856 - 15160.135: 98.7415% ( 8) 00:12:20.250 15160.135 - 15265.414: 98.7627% ( 3) 00:12:20.250 15265.414 - 15370.692: 98.8193% ( 8) 00:12:20.250 15370.692 - 15475.971: 98.8546% ( 5) 00:12:20.250 15475.971 - 15581.250: 98.8900% ( 5) 00:12:20.250 15581.250 - 15686.529: 98.9183% ( 4) 00:12:20.250 15686.529 - 15791.807: 98.9395% ( 3) 00:12:20.250 15791.807 - 15897.086: 98.9678% ( 4) 00:12:20.250 15897.086 - 16002.365: 99.0031% ( 5) 00:12:20.250 16002.365 - 16107.643: 99.0385% ( 5) 00:12:20.250 16107.643 - 16212.922: 99.0667% ( 4) 00:12:20.250 16212.922 - 16318.201: 99.0950% ( 4) 00:12:20.250 42322.043 - 42532.601: 99.1304% ( 5) 00:12:20.250 42532.601 - 42743.158: 99.1728% ( 6) 00:12:20.250 42743.158 - 42953.716: 99.2152% ( 6) 00:12:20.250 42953.716 - 43164.273: 99.2647% ( 7) 00:12:20.250 43164.273 - 43374.831: 99.3071% ( 6) 00:12:20.250 43374.831 - 43585.388: 99.3425% ( 5) 00:12:20.250 43585.388 - 43795.945: 99.3920% ( 7) 00:12:20.250 43795.945 - 44006.503: 99.4344% ( 6) 00:12:20.250 44006.503 - 44217.060: 99.4839% ( 7) 00:12:20.250 44217.060 - 44427.618: 99.5334% ( 7) 00:12:20.250 44427.618 - 44638.175: 99.5475% ( 2) 00:12:20.250 48849.324 - 49059.881: 99.5687% ( 3) 00:12:20.250 49059.881 - 49270.439: 99.6182% ( 7) 00:12:20.250 49270.439 - 49480.996: 99.6606% ( 6) 00:12:20.250 49480.996 - 49691.553: 99.7101% ( 7) 00:12:20.250 49691.553 - 49902.111: 99.7525% ( 6) 00:12:20.250 49902.111 - 50112.668: 99.8020% ( 7) 00:12:20.250 50112.668 - 50323.226: 99.8445% ( 6) 00:12:20.250 50323.226 - 50533.783: 99.8939% ( 7) 00:12:20.250 50533.783 - 50744.341: 99.9434% ( 7) 00:12:20.250 50744.341 - 50954.898: 99.9859% ( 6) 00:12:20.250 50954.898 - 51165.455: 100.0000% ( 2) 00:12:20.250 00:12:20.250 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:20.250 ============================================================================== 00:12:20.250 Range in us Cumulative IO count 00:12:20.250 7106.313 - 7158.953: 0.0141% ( 2) 00:12:20.250 7158.953 - 7211.592: 0.0424% ( 4) 00:12:20.250 7211.592 - 7264.231: 0.1202% ( 11) 00:12:20.250 7264.231 - 7316.871: 0.2404% ( 17) 00:12:20.250 7316.871 - 7369.510: 0.3959% ( 22) 00:12:20.250 7369.510 - 7422.149: 0.5798% ( 26) 00:12:20.250 7422.149 - 7474.789: 0.7777% ( 28) 00:12:20.250 7474.789 - 7527.428: 1.0959% ( 45) 00:12:20.250 7527.428 - 7580.067: 1.4494% ( 50) 00:12:20.250 7580.067 - 7632.707: 1.7887% ( 48) 00:12:20.250 7632.707 - 7685.346: 2.1988% ( 58) 00:12:20.250 7685.346 - 7737.986: 2.6160% ( 59) 00:12:20.250 7737.986 - 
7790.625: 3.0755% ( 65) 00:12:20.251 7790.625 - 7843.264: 3.5209% ( 63) 00:12:20.251 7843.264 - 7895.904: 4.1219% ( 85) 00:12:20.251 7895.904 - 7948.543: 4.7865% ( 94) 00:12:20.251 7948.543 - 8001.182: 5.7480% ( 136) 00:12:20.251 8001.182 - 8053.822: 6.9146% ( 165) 00:12:20.251 8053.822 - 8106.461: 8.5619% ( 233) 00:12:20.251 8106.461 - 8159.100: 10.7961% ( 316) 00:12:20.251 8159.100 - 8211.740: 13.4757% ( 379) 00:12:20.251 8211.740 - 8264.379: 16.7279% ( 460) 00:12:20.251 8264.379 - 8317.018: 20.2913% ( 504) 00:12:20.251 8317.018 - 8369.658: 24.5687% ( 605) 00:12:20.251 8369.658 - 8422.297: 29.0512% ( 634) 00:12:20.251 8422.297 - 8474.937: 33.4771% ( 626) 00:12:20.251 8474.937 - 8527.576: 38.1080% ( 655) 00:12:20.251 8527.576 - 8580.215: 42.7107% ( 651) 00:12:20.251 8580.215 - 8632.855: 47.2992% ( 649) 00:12:20.251 8632.855 - 8685.494: 51.7463% ( 629) 00:12:20.251 8685.494 - 8738.133: 56.2854% ( 642) 00:12:20.251 8738.133 - 8790.773: 60.7820% ( 636) 00:12:20.251 8790.773 - 8843.412: 65.4129% ( 655) 00:12:20.251 8843.412 - 8896.051: 69.9166% ( 637) 00:12:20.251 8896.051 - 8948.691: 74.2435% ( 612) 00:12:20.251 8948.691 - 9001.330: 78.4785% ( 599) 00:12:20.251 9001.330 - 9053.969: 82.1903% ( 525) 00:12:20.251 9053.969 - 9106.609: 85.2729% ( 436) 00:12:20.251 9106.609 - 9159.248: 87.7828% ( 355) 00:12:20.251 9159.248 - 9211.888: 89.5928% ( 256) 00:12:20.251 9211.888 - 9264.527: 90.9856% ( 197) 00:12:20.251 9264.527 - 9317.166: 92.1946% ( 171) 00:12:20.251 9317.166 - 9369.806: 93.1208% ( 131) 00:12:20.251 9369.806 - 9422.445: 93.8843% ( 108) 00:12:20.251 9422.445 - 9475.084: 94.4499% ( 80) 00:12:20.251 9475.084 - 9527.724: 94.8883% ( 62) 00:12:20.251 9527.724 - 9580.363: 95.2418% ( 50) 00:12:20.251 9580.363 - 9633.002: 95.5741% ( 47) 00:12:20.251 9633.002 - 9685.642: 95.8286% ( 36) 00:12:20.251 9685.642 - 9738.281: 96.0337% ( 29) 00:12:20.251 9738.281 - 9790.920: 96.1821% ( 21) 00:12:20.251 9790.920 - 9843.560: 96.2740% ( 13) 00:12:20.251 9843.560 - 9896.199: 96.3165% ( 6) 00:12:20.251 9896.199 - 9948.839: 96.3872% ( 10) 00:12:20.251 9948.839 - 10001.478: 96.4367% ( 7) 00:12:20.251 10001.478 - 10054.117: 96.5003% ( 9) 00:12:20.251 10054.117 - 10106.757: 96.5568% ( 8) 00:12:20.251 10106.757 - 10159.396: 96.6063% ( 7) 00:12:20.251 10159.396 - 10212.035: 96.6700% ( 9) 00:12:20.251 10212.035 - 10264.675: 96.7195% ( 7) 00:12:20.251 10264.675 - 10317.314: 96.7902% ( 10) 00:12:20.251 10317.314 - 10369.953: 96.8679% ( 11) 00:12:20.251 10369.953 - 10422.593: 96.9033% ( 5) 00:12:20.251 10422.593 - 10475.232: 96.9174% ( 2) 00:12:20.251 10475.232 - 10527.871: 96.9316% ( 2) 00:12:20.251 10527.871 - 10580.511: 96.9528% ( 3) 00:12:20.251 10580.511 - 10633.150: 96.9598% ( 1) 00:12:20.251 10633.150 - 10685.790: 96.9811% ( 3) 00:12:20.251 10685.790 - 10738.429: 96.9952% ( 2) 00:12:20.251 10738.429 - 10791.068: 97.0164% ( 3) 00:12:20.251 10791.068 - 10843.708: 97.0305% ( 2) 00:12:20.251 10843.708 - 10896.347: 97.0447% ( 2) 00:12:20.251 10896.347 - 10948.986: 97.0588% ( 2) 00:12:20.251 10948.986 - 11001.626: 97.0730% ( 2) 00:12:20.251 11001.626 - 11054.265: 97.0942% ( 3) 00:12:20.251 11054.265 - 11106.904: 97.1083% ( 2) 00:12:20.251 11106.904 - 11159.544: 97.1366% ( 4) 00:12:20.251 11159.544 - 11212.183: 97.1649% ( 4) 00:12:20.251 11212.183 - 11264.822: 97.2073% ( 6) 00:12:20.251 11264.822 - 11317.462: 97.2426% ( 5) 00:12:20.251 11317.462 - 11370.101: 97.2709% ( 4) 00:12:20.251 11370.101 - 11422.741: 97.2992% ( 4) 00:12:20.251 11422.741 - 11475.380: 97.3558% ( 8) 00:12:20.251 11475.380 - 11528.019: 97.4053% ( 7) 
00:12:20.251 11528.019 - 11580.659: 97.4406% ( 5) 00:12:20.251 11580.659 - 11633.298: 97.4972% ( 8) 00:12:20.251 11633.298 - 11685.937: 97.5396% ( 6) 00:12:20.251 11685.937 - 11738.577: 97.5679% ( 4) 00:12:20.251 11738.577 - 11791.216: 97.5962% ( 4) 00:12:20.251 11791.216 - 11843.855: 97.6174% ( 3) 00:12:20.251 11843.855 - 11896.495: 97.6598% ( 6) 00:12:20.251 11896.495 - 11949.134: 97.6951% ( 5) 00:12:20.251 11949.134 - 12001.773: 97.7163% ( 3) 00:12:20.251 12001.773 - 12054.413: 97.7446% ( 4) 00:12:20.251 12054.413 - 12107.052: 97.7729% ( 4) 00:12:20.251 12107.052 - 12159.692: 97.7800% ( 1) 00:12:20.251 12159.692 - 12212.331: 97.8083% ( 4) 00:12:20.251 12212.331 - 12264.970: 97.8295% ( 3) 00:12:20.251 12264.970 - 12317.610: 97.8648% ( 5) 00:12:20.251 12317.610 - 12370.249: 97.8860% ( 3) 00:12:20.251 12370.249 - 12422.888: 97.9143% ( 4) 00:12:20.251 12422.888 - 12475.528: 97.9426% ( 4) 00:12:20.251 12475.528 - 12528.167: 97.9709% ( 4) 00:12:20.251 12528.167 - 12580.806: 97.9921% ( 3) 00:12:20.251 12580.806 - 12633.446: 98.0204% ( 4) 00:12:20.251 12633.446 - 12686.085: 98.0486% ( 4) 00:12:20.251 12686.085 - 12738.724: 98.0699% ( 3) 00:12:20.251 12738.724 - 12791.364: 98.1052% ( 5) 00:12:20.251 12791.364 - 12844.003: 98.1264% ( 3) 00:12:20.251 12844.003 - 12896.643: 98.1547% ( 4) 00:12:20.251 12896.643 - 12949.282: 98.1830% ( 4) 00:12:20.251 12949.282 - 13001.921: 98.2042% ( 3) 00:12:20.251 13001.921 - 13054.561: 98.2254% ( 3) 00:12:20.251 13054.561 - 13107.200: 98.2537% ( 4) 00:12:20.251 13107.200 - 13159.839: 98.2749% ( 3) 00:12:20.251 13159.839 - 13212.479: 98.3032% ( 4) 00:12:20.251 13212.479 - 13265.118: 98.3173% ( 2) 00:12:20.251 13265.118 - 13317.757: 98.3244% ( 1) 00:12:20.251 13317.757 - 13370.397: 98.3385% ( 2) 00:12:20.251 13370.397 - 13423.036: 98.3456% ( 1) 00:12:20.251 13423.036 - 13475.676: 98.3597% ( 2) 00:12:20.251 13475.676 - 13580.954: 98.3809% ( 3) 00:12:20.251 13580.954 - 13686.233: 98.4092% ( 4) 00:12:20.251 13686.233 - 13791.512: 98.4304% ( 3) 00:12:20.251 13791.512 - 13896.790: 98.4516% ( 3) 00:12:20.251 13896.790 - 14002.069: 98.4729% ( 3) 00:12:20.251 14002.069 - 14107.348: 98.4870% ( 2) 00:12:20.251 14107.348 - 14212.627: 98.5082% ( 3) 00:12:20.251 14212.627 - 14317.905: 98.5223% ( 2) 00:12:20.251 14317.905 - 14423.184: 98.5436% ( 3) 00:12:20.251 14423.184 - 14528.463: 98.5718% ( 4) 00:12:20.251 14528.463 - 14633.741: 98.5930% ( 3) 00:12:20.251 14633.741 - 14739.020: 98.6143% ( 3) 00:12:20.251 14739.020 - 14844.299: 98.6355% ( 3) 00:12:20.251 14844.299 - 14949.578: 98.6425% ( 1) 00:12:20.251 15160.135 - 15265.414: 98.6567% ( 2) 00:12:20.251 15265.414 - 15370.692: 98.7062% ( 7) 00:12:20.251 15370.692 - 15475.971: 98.7344% ( 4) 00:12:20.251 15475.971 - 15581.250: 98.7839% ( 7) 00:12:20.251 15581.250 - 15686.529: 98.8193% ( 5) 00:12:20.251 15686.529 - 15791.807: 98.8546% ( 5) 00:12:20.251 15791.807 - 15897.086: 98.8900% ( 5) 00:12:20.251 15897.086 - 16002.365: 98.9253% ( 5) 00:12:20.251 16002.365 - 16107.643: 98.9607% ( 5) 00:12:20.251 16107.643 - 16212.922: 99.0031% ( 6) 00:12:20.251 16212.922 - 16318.201: 99.0385% ( 5) 00:12:20.251 16318.201 - 16423.480: 99.0738% ( 5) 00:12:20.251 16423.480 - 16528.758: 99.0950% ( 3) 00:12:20.251 40005.912 - 40216.469: 99.1021% ( 1) 00:12:20.251 40216.469 - 40427.027: 99.1516% ( 7) 00:12:20.251 40427.027 - 40637.584: 99.1940% ( 6) 00:12:20.251 40637.584 - 40848.141: 99.2435% ( 7) 00:12:20.251 40848.141 - 41058.699: 99.2930% ( 7) 00:12:20.251 41058.699 - 41269.256: 99.3354% ( 6) 00:12:20.251 41269.256 - 41479.814: 99.3849% ( 7) 
00:12:20.251 41479.814 - 41690.371: 99.4273% ( 6) 00:12:20.251 41690.371 - 41900.929: 99.4768% ( 7) 00:12:20.251 41900.929 - 42111.486: 99.5263% ( 7) 00:12:20.251 42111.486 - 42322.043: 99.5475% ( 3) 00:12:20.251 46743.749 - 46954.307: 99.5899% ( 6) 00:12:20.251 46954.307 - 47164.864: 99.6394% ( 7) 00:12:20.251 47164.864 - 47375.422: 99.6889% ( 7) 00:12:20.251 47375.422 - 47585.979: 99.7384% ( 7) 00:12:20.251 47585.979 - 47796.537: 99.7879% ( 7) 00:12:20.251 47796.537 - 48007.094: 99.8374% ( 7) 00:12:20.251 48007.094 - 48217.651: 99.8798% ( 6) 00:12:20.251 48217.651 - 48428.209: 99.9293% ( 7) 00:12:20.251 48428.209 - 48638.766: 99.9859% ( 8) 00:12:20.251 48638.766 - 48849.324: 100.0000% ( 2) 00:12:20.251 00:12:20.251 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:20.251 ============================================================================== 00:12:20.251 Range in us Cumulative IO count 00:12:20.251 4000.591 - 4026.911: 0.0071% ( 1) 00:12:20.251 7369.510 - 7422.149: 0.0141% ( 1) 00:12:20.251 7422.149 - 7474.789: 0.1131% ( 14) 00:12:20.251 7474.789 - 7527.428: 0.1909% ( 11) 00:12:20.251 7527.428 - 7580.067: 0.3747% ( 26) 00:12:20.251 7580.067 - 7632.707: 0.6999% ( 46) 00:12:20.251 7632.707 - 7685.346: 1.1524% ( 64) 00:12:20.251 7685.346 - 7737.986: 1.8382% ( 97) 00:12:20.251 7737.986 - 7790.625: 2.4392% ( 85) 00:12:20.251 7790.625 - 7843.264: 3.1321% ( 98) 00:12:20.251 7843.264 - 7895.904: 3.9946% ( 122) 00:12:20.251 7895.904 - 7948.543: 5.1188% ( 159) 00:12:20.251 7948.543 - 8001.182: 6.2005% ( 153) 00:12:20.251 8001.182 - 8053.822: 7.3954% ( 169) 00:12:20.251 8053.822 - 8106.461: 9.2265% ( 259) 00:12:20.251 8106.461 - 8159.100: 11.5173% ( 324) 00:12:20.251 8159.100 - 8211.740: 14.1191% ( 368) 00:12:20.251 8211.740 - 8264.379: 17.3996% ( 464) 00:12:20.251 8264.379 - 8317.018: 21.0761% ( 520) 00:12:20.251 8317.018 - 8369.658: 24.9293% ( 545) 00:12:20.251 8369.658 - 8422.297: 29.0017% ( 576) 00:12:20.251 8422.297 - 8474.937: 33.6044% ( 651) 00:12:20.251 8474.937 - 8527.576: 38.1575% ( 644) 00:12:20.251 8527.576 - 8580.215: 42.5976% ( 628) 00:12:20.251 8580.215 - 8632.855: 47.2780% ( 662) 00:12:20.251 8632.855 - 8685.494: 51.9796% ( 665) 00:12:20.251 8685.494 - 8738.133: 56.6106% ( 655) 00:12:20.251 8738.133 - 8790.773: 61.0506% ( 628) 00:12:20.251 8790.773 - 8843.412: 65.5755% ( 640) 00:12:20.252 8843.412 - 8896.051: 70.0933% ( 639) 00:12:20.252 8896.051 - 8948.691: 74.3990% ( 609) 00:12:20.252 8948.691 - 9001.330: 78.5421% ( 586) 00:12:20.252 9001.330 - 9053.969: 82.1126% ( 505) 00:12:20.252 9053.969 - 9106.609: 85.2800% ( 448) 00:12:20.252 9106.609 - 9159.248: 87.8676% ( 366) 00:12:20.252 9159.248 - 9211.888: 89.8826% ( 285) 00:12:20.252 9211.888 - 9264.527: 91.4593% ( 223) 00:12:20.252 9264.527 - 9317.166: 92.6258% ( 165) 00:12:20.252 9317.166 - 9369.806: 93.5238% ( 127) 00:12:20.252 9369.806 - 9422.445: 94.2449% ( 102) 00:12:20.252 9422.445 - 9475.084: 94.7893% ( 77) 00:12:20.252 9475.084 - 9527.724: 95.2418% ( 64) 00:12:20.252 9527.724 - 9580.363: 95.5600% ( 45) 00:12:20.252 9580.363 - 9633.002: 95.7579% ( 28) 00:12:20.252 9633.002 - 9685.642: 95.9842% ( 32) 00:12:20.252 9685.642 - 9738.281: 96.1185% ( 19) 00:12:20.252 9738.281 - 9790.920: 96.2528% ( 19) 00:12:20.252 9790.920 - 9843.560: 96.3589% ( 15) 00:12:20.252 9843.560 - 9896.199: 96.4367% ( 11) 00:12:20.252 9896.199 - 9948.839: 96.5003% ( 9) 00:12:20.252 9948.839 - 10001.478: 96.5639% ( 9) 00:12:20.252 10001.478 - 10054.117: 96.6346% ( 10) 00:12:20.252 10054.117 - 10106.757: 96.7195% ( 12) 00:12:20.252 
10106.757 - 10159.396: 96.7902% ( 10) 00:12:20.252 10159.396 - 10212.035: 96.8396% ( 7) 00:12:20.252 10212.035 - 10264.675: 96.8750% ( 5) 00:12:20.252 10264.675 - 10317.314: 96.9104% ( 5) 00:12:20.252 10317.314 - 10369.953: 96.9457% ( 5) 00:12:20.252 10369.953 - 10422.593: 96.9528% ( 1) 00:12:20.252 10422.593 - 10475.232: 96.9669% ( 2) 00:12:20.252 10475.232 - 10527.871: 96.9811% ( 2) 00:12:20.252 10527.871 - 10580.511: 97.0023% ( 3) 00:12:20.252 10580.511 - 10633.150: 97.0235% ( 3) 00:12:20.252 10633.150 - 10685.790: 97.0376% ( 2) 00:12:20.252 10685.790 - 10738.429: 97.0588% ( 3) 00:12:20.252 10738.429 - 10791.068: 97.0730% ( 2) 00:12:20.252 10791.068 - 10843.708: 97.0942% ( 3) 00:12:20.252 10843.708 - 10896.347: 97.1083% ( 2) 00:12:20.252 10896.347 - 10948.986: 97.1225% ( 2) 00:12:20.252 10948.986 - 11001.626: 97.1437% ( 3) 00:12:20.252 11001.626 - 11054.265: 97.1578% ( 2) 00:12:20.252 11054.265 - 11106.904: 97.1861% ( 4) 00:12:20.252 11106.904 - 11159.544: 97.2285% ( 6) 00:12:20.252 11159.544 - 11212.183: 97.2568% ( 4) 00:12:20.252 11212.183 - 11264.822: 97.2851% ( 4) 00:12:20.252 11264.822 - 11317.462: 97.3275% ( 6) 00:12:20.252 11317.462 - 11370.101: 97.3558% ( 4) 00:12:20.252 11370.101 - 11422.741: 97.3911% ( 5) 00:12:20.252 11422.741 - 11475.380: 97.4053% ( 2) 00:12:20.252 11475.380 - 11528.019: 97.4194% ( 2) 00:12:20.252 11528.019 - 11580.659: 97.4406% ( 3) 00:12:20.252 11580.659 - 11633.298: 97.4618% ( 3) 00:12:20.252 11633.298 - 11685.937: 97.4830% ( 3) 00:12:20.252 11685.937 - 11738.577: 97.5042% ( 3) 00:12:20.252 11738.577 - 11791.216: 97.5325% ( 4) 00:12:20.252 11791.216 - 11843.855: 97.5679% ( 5) 00:12:20.252 11843.855 - 11896.495: 97.5891% ( 3) 00:12:20.252 11896.495 - 11949.134: 97.6103% ( 3) 00:12:20.252 11949.134 - 12001.773: 97.6456% ( 5) 00:12:20.252 12001.773 - 12054.413: 97.6810% ( 5) 00:12:20.252 12054.413 - 12107.052: 97.7022% ( 3) 00:12:20.252 12107.052 - 12159.692: 97.7305% ( 4) 00:12:20.252 12159.692 - 12212.331: 97.7658% ( 5) 00:12:20.252 12212.331 - 12264.970: 97.7941% ( 4) 00:12:20.252 12264.970 - 12317.610: 97.8224% ( 4) 00:12:20.252 12317.610 - 12370.249: 97.8507% ( 4) 00:12:20.252 12370.249 - 12422.888: 97.8860% ( 5) 00:12:20.252 12422.888 - 12475.528: 97.9072% ( 3) 00:12:20.252 12475.528 - 12528.167: 97.9355% ( 4) 00:12:20.252 12528.167 - 12580.806: 97.9497% ( 2) 00:12:20.252 12580.806 - 12633.446: 97.9850% ( 5) 00:12:20.252 12633.446 - 12686.085: 98.0204% ( 5) 00:12:20.252 12686.085 - 12738.724: 98.0345% ( 2) 00:12:20.252 12738.724 - 12791.364: 98.0628% ( 4) 00:12:20.252 12791.364 - 12844.003: 98.0840% ( 3) 00:12:20.252 12844.003 - 12896.643: 98.1123% ( 4) 00:12:20.252 12896.643 - 12949.282: 98.1406% ( 4) 00:12:20.252 12949.282 - 13001.921: 98.1618% ( 3) 00:12:20.252 13001.921 - 13054.561: 98.1900% ( 4) 00:12:20.252 13054.561 - 13107.200: 98.2113% ( 3) 00:12:20.252 13107.200 - 13159.839: 98.2395% ( 4) 00:12:20.252 13159.839 - 13212.479: 98.2678% ( 4) 00:12:20.252 13212.479 - 13265.118: 98.2890% ( 3) 00:12:20.252 13265.118 - 13317.757: 98.3102% ( 3) 00:12:20.252 13317.757 - 13370.397: 98.3385% ( 4) 00:12:20.252 13370.397 - 13423.036: 98.3668% ( 4) 00:12:20.252 13423.036 - 13475.676: 98.3880% ( 3) 00:12:20.252 13475.676 - 13580.954: 98.4163% ( 4) 00:12:20.252 13580.954 - 13686.233: 98.4375% ( 3) 00:12:20.252 13686.233 - 13791.512: 98.4658% ( 4) 00:12:20.252 13791.512 - 13896.790: 98.4870% ( 3) 00:12:20.252 13896.790 - 14002.069: 98.5082% ( 3) 00:12:20.252 14002.069 - 14107.348: 98.5365% ( 4) 00:12:20.252 14107.348 - 14212.627: 98.5577% ( 3) 00:12:20.252 
14212.627 - 14317.905: 98.5789% ( 3) 00:12:20.252 14317.905 - 14423.184: 98.6001% ( 3) 00:12:20.252 14423.184 - 14528.463: 98.6143% ( 2) 00:12:20.252 14528.463 - 14633.741: 98.6355% ( 3) 00:12:20.252 14633.741 - 14739.020: 98.6425% ( 1) 00:12:20.252 15475.971 - 15581.250: 98.6779% ( 5) 00:12:20.252 15581.250 - 15686.529: 98.7132% ( 5) 00:12:20.252 15686.529 - 15791.807: 98.7486% ( 5) 00:12:20.252 15791.807 - 15897.086: 98.7910% ( 6) 00:12:20.252 15897.086 - 16002.365: 98.8264% ( 5) 00:12:20.252 16002.365 - 16107.643: 98.8617% ( 5) 00:12:20.252 16107.643 - 16212.922: 98.8971% ( 5) 00:12:20.252 16212.922 - 16318.201: 98.9395% ( 6) 00:12:20.252 16318.201 - 16423.480: 98.9678% ( 4) 00:12:20.252 16423.480 - 16528.758: 99.0102% ( 6) 00:12:20.252 16528.758 - 16634.037: 99.0455% ( 5) 00:12:20.252 16634.037 - 16739.316: 99.0880% ( 6) 00:12:20.252 16739.316 - 16844.594: 99.0950% ( 1) 00:12:20.252 38321.452 - 38532.010: 99.1374% ( 6) 00:12:20.252 38532.010 - 38742.567: 99.1869% ( 7) 00:12:20.252 38742.567 - 38953.124: 99.2364% ( 7) 00:12:20.252 38953.124 - 39163.682: 99.2859% ( 7) 00:12:20.252 39163.682 - 39374.239: 99.3283% ( 6) 00:12:20.252 39374.239 - 39584.797: 99.3778% ( 7) 00:12:20.252 39584.797 - 39795.354: 99.4202% ( 6) 00:12:20.252 39795.354 - 40005.912: 99.4697% ( 7) 00:12:20.252 40005.912 - 40216.469: 99.5192% ( 7) 00:12:20.252 40216.469 - 40427.027: 99.5475% ( 4) 00:12:20.252 44848.733 - 45059.290: 99.5970% ( 7) 00:12:20.252 45059.290 - 45269.847: 99.6465% ( 7) 00:12:20.252 45269.847 - 45480.405: 99.6889% ( 6) 00:12:20.252 45480.405 - 45690.962: 99.7384% ( 7) 00:12:20.252 45690.962 - 45901.520: 99.7879% ( 7) 00:12:20.252 45901.520 - 46112.077: 99.8303% ( 6) 00:12:20.252 46112.077 - 46322.635: 99.8727% ( 6) 00:12:20.252 46322.635 - 46533.192: 99.9222% ( 7) 00:12:20.252 46533.192 - 46743.749: 99.9646% ( 6) 00:12:20.252 46743.749 - 46954.307: 100.0000% ( 5) 00:12:20.252 00:12:20.252 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:20.252 ============================================================================== 00:12:20.252 Range in us Cumulative IO count 00:12:20.252 7106.313 - 7158.953: 0.0212% ( 3) 00:12:20.252 7158.953 - 7211.592: 0.0424% ( 3) 00:12:20.252 7211.592 - 7264.231: 0.0919% ( 7) 00:12:20.252 7264.231 - 7316.871: 0.1838% ( 13) 00:12:20.252 7316.871 - 7369.510: 0.2757% ( 13) 00:12:20.252 7369.510 - 7422.149: 0.4242% ( 21) 00:12:20.252 7422.149 - 7474.789: 0.6080% ( 26) 00:12:20.252 7474.789 - 7527.428: 0.8413% ( 33) 00:12:20.252 7527.428 - 7580.067: 1.1878% ( 49) 00:12:20.252 7580.067 - 7632.707: 1.5696% ( 54) 00:12:20.252 7632.707 - 7685.346: 1.9726% ( 57) 00:12:20.252 7685.346 - 7737.986: 2.4038% ( 61) 00:12:20.252 7737.986 - 7790.625: 2.9058% ( 71) 00:12:20.252 7790.625 - 7843.264: 3.4502% ( 77) 00:12:20.252 7843.264 - 7895.904: 4.0865% ( 90) 00:12:20.252 7895.904 - 7948.543: 4.8925% ( 114) 00:12:20.252 7948.543 - 8001.182: 5.7763% ( 125) 00:12:20.252 8001.182 - 8053.822: 7.0136% ( 175) 00:12:20.252 8053.822 - 8106.461: 8.6326% ( 229) 00:12:20.252 8106.461 - 8159.100: 10.6476% ( 285) 00:12:20.252 8159.100 - 8211.740: 13.2706% ( 371) 00:12:20.252 8211.740 - 8264.379: 16.6502% ( 478) 00:12:20.252 8264.379 - 8317.018: 20.3125% ( 518) 00:12:20.252 8317.018 - 8369.658: 24.4839% ( 590) 00:12:20.252 8369.658 - 8422.297: 29.0017% ( 639) 00:12:20.252 8422.297 - 8474.937: 33.5761% ( 647) 00:12:20.252 8474.937 - 8527.576: 38.0798% ( 637) 00:12:20.252 8527.576 - 8580.215: 42.6046% ( 640) 00:12:20.252 8580.215 - 8632.855: 47.0942% ( 635) 00:12:20.252 8632.855 - 
8685.494: 51.7110% ( 653) 00:12:20.252 8685.494 - 8738.133: 56.4197% ( 666) 00:12:20.252 8738.133 - 8790.773: 60.9799% ( 645) 00:12:20.252 8790.773 - 8843.412: 65.5472% ( 646) 00:12:20.252 8843.412 - 8896.051: 69.9661% ( 625) 00:12:20.252 8896.051 - 8948.691: 74.3001% ( 613) 00:12:20.252 8948.691 - 9001.330: 78.4432% ( 586) 00:12:20.252 9001.330 - 9053.969: 81.9358% ( 494) 00:12:20.252 9053.969 - 9106.609: 85.1174% ( 450) 00:12:20.252 9106.609 - 9159.248: 87.6555% ( 359) 00:12:20.252 9159.248 - 9211.888: 89.5715% ( 271) 00:12:20.252 9211.888 - 9264.527: 91.0846% ( 214) 00:12:20.252 9264.527 - 9317.166: 92.2865% ( 170) 00:12:20.252 9317.166 - 9369.806: 93.2197% ( 132) 00:12:20.252 9369.806 - 9422.445: 94.0116% ( 112) 00:12:20.252 9422.445 - 9475.084: 94.6196% ( 86) 00:12:20.252 9475.084 - 9527.724: 95.0650% ( 63) 00:12:20.252 9527.724 - 9580.363: 95.4256% ( 51) 00:12:20.252 9580.363 - 9633.002: 95.6660% ( 34) 00:12:20.252 9633.002 - 9685.642: 95.8640% ( 28) 00:12:20.252 9685.642 - 9738.281: 96.0195% ( 22) 00:12:20.252 9738.281 - 9790.920: 96.1680% ( 21) 00:12:20.252 9790.920 - 9843.560: 96.3023% ( 19) 00:12:20.252 9843.560 - 9896.199: 96.3942% ( 13) 00:12:20.253 9896.199 - 9948.839: 96.4861% ( 13) 00:12:20.253 9948.839 - 10001.478: 96.5710% ( 12) 00:12:20.253 10001.478 - 10054.117: 96.6629% ( 13) 00:12:20.253 10054.117 - 10106.757: 96.7336% ( 10) 00:12:20.253 10106.757 - 10159.396: 96.7760% ( 6) 00:12:20.253 10159.396 - 10212.035: 96.8114% ( 5) 00:12:20.253 10212.035 - 10264.675: 96.8396% ( 4) 00:12:20.253 10264.675 - 10317.314: 96.8679% ( 4) 00:12:20.253 10317.314 - 10369.953: 96.8962% ( 4) 00:12:20.253 10369.953 - 10422.593: 96.9316% ( 5) 00:12:20.253 10422.593 - 10475.232: 96.9669% ( 5) 00:12:20.253 10475.232 - 10527.871: 96.9952% ( 4) 00:12:20.253 10527.871 - 10580.511: 97.0305% ( 5) 00:12:20.253 10580.511 - 10633.150: 97.0659% ( 5) 00:12:20.253 10633.150 - 10685.790: 97.0942% ( 4) 00:12:20.253 10685.790 - 10738.429: 97.1225% ( 4) 00:12:20.253 10738.429 - 10791.068: 97.1507% ( 4) 00:12:20.253 10791.068 - 10843.708: 97.1719% ( 3) 00:12:20.253 10843.708 - 10896.347: 97.1861% ( 2) 00:12:20.253 10896.347 - 10948.986: 97.2073% ( 3) 00:12:20.253 10948.986 - 11001.626: 97.2214% ( 2) 00:12:20.253 11001.626 - 11054.265: 97.2285% ( 1) 00:12:20.253 11054.265 - 11106.904: 97.2497% ( 3) 00:12:20.253 11106.904 - 11159.544: 97.2639% ( 2) 00:12:20.253 11159.544 - 11212.183: 97.2851% ( 3) 00:12:20.253 11422.741 - 11475.380: 97.2992% ( 2) 00:12:20.253 11475.380 - 11528.019: 97.3204% ( 3) 00:12:20.253 11528.019 - 11580.659: 97.3346% ( 2) 00:12:20.253 11580.659 - 11633.298: 97.3487% ( 2) 00:12:20.253 11633.298 - 11685.937: 97.3628% ( 2) 00:12:20.253 11685.937 - 11738.577: 97.3770% ( 2) 00:12:20.253 11738.577 - 11791.216: 97.3911% ( 2) 00:12:20.253 11791.216 - 11843.855: 97.4265% ( 5) 00:12:20.253 11843.855 - 11896.495: 97.4406% ( 2) 00:12:20.253 11896.495 - 11949.134: 97.4548% ( 2) 00:12:20.253 11949.134 - 12001.773: 97.4689% ( 2) 00:12:20.253 12001.773 - 12054.413: 97.4901% ( 3) 00:12:20.253 12054.413 - 12107.052: 97.5255% ( 5) 00:12:20.253 12107.052 - 12159.692: 97.5467% ( 3) 00:12:20.253 12159.692 - 12212.331: 97.5820% ( 5) 00:12:20.253 12212.331 - 12264.970: 97.6032% ( 3) 00:12:20.253 12264.970 - 12317.610: 97.6315% ( 4) 00:12:20.253 12317.610 - 12370.249: 97.6527% ( 3) 00:12:20.253 12370.249 - 12422.888: 97.6951% ( 6) 00:12:20.253 12422.888 - 12475.528: 97.7163% ( 3) 00:12:20.253 12475.528 - 12528.167: 97.7446% ( 4) 00:12:20.253 12528.167 - 12580.806: 97.8012% ( 8) 00:12:20.253 12580.806 - 12633.446: 
97.8648% ( 9) 00:12:20.253 12633.446 - 12686.085: 97.9002% ( 5) 00:12:20.253 12686.085 - 12738.724: 97.9355% ( 5) 00:12:20.253 12738.724 - 12791.364: 97.9709% ( 5) 00:12:20.253 12791.364 - 12844.003: 98.0133% ( 6) 00:12:20.253 12844.003 - 12896.643: 98.0416% ( 4) 00:12:20.253 12896.643 - 12949.282: 98.0557% ( 2) 00:12:20.253 12949.282 - 13001.921: 98.0840% ( 4) 00:12:20.253 13001.921 - 13054.561: 98.1052% ( 3) 00:12:20.253 13054.561 - 13107.200: 98.1335% ( 4) 00:12:20.253 13107.200 - 13159.839: 98.1547% ( 3) 00:12:20.253 13159.839 - 13212.479: 98.1830% ( 4) 00:12:20.253 13212.479 - 13265.118: 98.2113% ( 4) 00:12:20.253 13265.118 - 13317.757: 98.2254% ( 2) 00:12:20.253 13317.757 - 13370.397: 98.2607% ( 5) 00:12:20.253 13370.397 - 13423.036: 98.2890% ( 4) 00:12:20.253 13423.036 - 13475.676: 98.3032% ( 2) 00:12:20.253 13475.676 - 13580.954: 98.3527% ( 7) 00:12:20.253 13580.954 - 13686.233: 98.4021% ( 7) 00:12:20.253 13686.233 - 13791.512: 98.4516% ( 7) 00:12:20.253 13791.512 - 13896.790: 98.5011% ( 7) 00:12:20.253 13896.790 - 14002.069: 98.5506% ( 7) 00:12:20.253 14002.069 - 14107.348: 98.6001% ( 7) 00:12:20.253 14107.348 - 14212.627: 98.6355% ( 5) 00:12:20.253 14212.627 - 14317.905: 98.6425% ( 1) 00:12:20.253 14739.020 - 14844.299: 98.6708% ( 4) 00:12:20.253 14844.299 - 14949.578: 98.6920% ( 3) 00:12:20.253 14949.578 - 15054.856: 98.7062% ( 2) 00:12:20.253 15054.856 - 15160.135: 98.7274% ( 3) 00:12:20.253 15160.135 - 15265.414: 98.7557% ( 4) 00:12:20.253 15265.414 - 15370.692: 98.7769% ( 3) 00:12:20.253 15370.692 - 15475.971: 98.7981% ( 3) 00:12:20.253 15475.971 - 15581.250: 98.8264% ( 4) 00:12:20.253 15581.250 - 15686.529: 98.8476% ( 3) 00:12:20.253 15686.529 - 15791.807: 98.8758% ( 4) 00:12:20.253 15791.807 - 15897.086: 98.8971% ( 3) 00:12:20.253 15897.086 - 16002.365: 98.9183% ( 3) 00:12:20.253 16002.365 - 16107.643: 98.9465% ( 4) 00:12:20.253 16107.643 - 16212.922: 98.9678% ( 3) 00:12:20.253 16212.922 - 16318.201: 98.9960% ( 4) 00:12:20.253 16318.201 - 16423.480: 99.0173% ( 3) 00:12:20.253 16423.480 - 16528.758: 99.0385% ( 3) 00:12:20.253 16528.758 - 16634.037: 99.0597% ( 3) 00:12:20.253 16634.037 - 16739.316: 99.0809% ( 3) 00:12:20.253 16739.316 - 16844.594: 99.0950% ( 2) 00:12:20.253 36005.320 - 36215.878: 99.1162% ( 3) 00:12:20.253 36215.878 - 36426.435: 99.1657% ( 7) 00:12:20.253 36426.435 - 36636.993: 99.2152% ( 7) 00:12:20.253 36636.993 - 36847.550: 99.2576% ( 6) 00:12:20.253 36847.550 - 37058.108: 99.3071% ( 7) 00:12:20.253 37058.108 - 37268.665: 99.3566% ( 7) 00:12:20.253 37268.665 - 37479.222: 99.4061% ( 7) 00:12:20.253 37479.222 - 37689.780: 99.4485% ( 6) 00:12:20.253 37689.780 - 37900.337: 99.4910% ( 6) 00:12:20.253 37900.337 - 38110.895: 99.5475% ( 8) 00:12:20.253 42532.601 - 42743.158: 99.5758% ( 4) 00:12:20.253 42743.158 - 42953.716: 99.6253% ( 7) 00:12:20.253 42953.716 - 43164.273: 99.6748% ( 7) 00:12:20.253 43164.273 - 43374.831: 99.7172% ( 6) 00:12:20.253 43374.831 - 43585.388: 99.7667% ( 7) 00:12:20.253 43585.388 - 43795.945: 99.8162% ( 7) 00:12:20.253 43795.945 - 44006.503: 99.8657% ( 7) 00:12:20.253 44006.503 - 44217.060: 99.9152% ( 7) 00:12:20.253 44217.060 - 44427.618: 99.9646% ( 7) 00:12:20.253 44427.618 - 44638.175: 100.0000% ( 5) 00:12:20.253 00:12:20.253 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:20.253 ============================================================================== 00:12:20.253 Range in us Cumulative IO count 00:12:20.253 7106.313 - 7158.953: 0.0071% ( 1) 00:12:20.253 7158.953 - 7211.592: 0.0495% ( 6) 00:12:20.253 
7211.592 - 7264.231: 0.1131% ( 9) 00:12:20.253 7264.231 - 7316.871: 0.1768% ( 9) 00:12:20.253 7316.871 - 7369.510: 0.2969% ( 17) 00:12:20.253 7369.510 - 7422.149: 0.4383% ( 20) 00:12:20.253 7422.149 - 7474.789: 0.6434% ( 29) 00:12:20.253 7474.789 - 7527.428: 0.8767% ( 33) 00:12:20.253 7527.428 - 7580.067: 1.2019% ( 46) 00:12:20.253 7580.067 - 7632.707: 1.5554% ( 50) 00:12:20.253 7632.707 - 7685.346: 1.9231% ( 52) 00:12:20.253 7685.346 - 7737.986: 2.3544% ( 61) 00:12:20.253 7737.986 - 7790.625: 2.8068% ( 64) 00:12:20.253 7790.625 - 7843.264: 3.2876% ( 68) 00:12:20.253 7843.264 - 7895.904: 3.8037% ( 73) 00:12:20.253 7895.904 - 7948.543: 4.4542% ( 92) 00:12:20.253 7948.543 - 8001.182: 5.3874% ( 132) 00:12:20.253 8001.182 - 8053.822: 6.6742% ( 182) 00:12:20.253 8053.822 - 8106.461: 8.2791% ( 227) 00:12:20.253 8106.461 - 8159.100: 10.1032% ( 258) 00:12:20.253 8159.100 - 8211.740: 13.2706% ( 448) 00:12:20.253 8211.740 - 8264.379: 16.9118% ( 515) 00:12:20.253 8264.379 - 8317.018: 21.0054% ( 579) 00:12:20.253 8317.018 - 8369.658: 25.1202% ( 582) 00:12:20.253 8369.658 - 8422.297: 29.2916% ( 590) 00:12:20.253 8422.297 - 8474.937: 33.8306% ( 642) 00:12:20.253 8474.937 - 8527.576: 38.2989% ( 632) 00:12:20.253 8527.576 - 8580.215: 42.9440% ( 657) 00:12:20.253 8580.215 - 8632.855: 47.6739% ( 669) 00:12:20.253 8632.855 - 8685.494: 52.1210% ( 629) 00:12:20.253 8685.494 - 8738.133: 56.6530% ( 641) 00:12:20.253 8738.133 - 8790.773: 61.1143% ( 631) 00:12:20.253 8790.773 - 8843.412: 65.5189% ( 623) 00:12:20.253 8843.412 - 8896.051: 69.9449% ( 626) 00:12:20.253 8896.051 - 8948.691: 74.2223% ( 605) 00:12:20.253 8948.691 - 9001.330: 78.2947% ( 576) 00:12:20.253 9001.330 - 9053.969: 82.0701% ( 534) 00:12:20.253 9053.969 - 9106.609: 85.0891% ( 427) 00:12:20.253 9106.609 - 9159.248: 87.5707% ( 351) 00:12:20.253 9159.248 - 9211.888: 89.4796% ( 270) 00:12:20.253 9211.888 - 9264.527: 90.9785% ( 212) 00:12:20.253 9264.527 - 9317.166: 92.1239% ( 162) 00:12:20.253 9317.166 - 9369.806: 93.0713% ( 134) 00:12:20.253 9369.806 - 9422.445: 93.7641% ( 98) 00:12:20.253 9422.445 - 9475.084: 94.3580% ( 84) 00:12:20.253 9475.084 - 9527.724: 94.7893% ( 61) 00:12:20.253 9527.724 - 9580.363: 95.1287% ( 48) 00:12:20.253 9580.363 - 9633.002: 95.4044% ( 39) 00:12:20.253 9633.002 - 9685.642: 95.6589% ( 36) 00:12:20.253 9685.642 - 9738.281: 95.8215% ( 23) 00:12:20.253 9738.281 - 9790.920: 95.9771% ( 22) 00:12:20.253 9790.920 - 9843.560: 96.1397% ( 23) 00:12:20.253 9843.560 - 9896.199: 96.2387% ( 14) 00:12:20.253 9896.199 - 9948.839: 96.3306% ( 13) 00:12:20.253 9948.839 - 10001.478: 96.3942% ( 9) 00:12:20.253 10001.478 - 10054.117: 96.4437% ( 7) 00:12:20.253 10054.117 - 10106.757: 96.4861% ( 6) 00:12:20.253 10106.757 - 10159.396: 96.5215% ( 5) 00:12:20.253 10159.396 - 10212.035: 96.5781% ( 8) 00:12:20.253 10212.035 - 10264.675: 96.6205% ( 6) 00:12:20.253 10264.675 - 10317.314: 96.6700% ( 7) 00:12:20.253 10317.314 - 10369.953: 96.7124% ( 6) 00:12:20.253 10369.953 - 10422.593: 96.7619% ( 7) 00:12:20.253 10422.593 - 10475.232: 96.8043% ( 6) 00:12:20.253 10475.232 - 10527.871: 96.8538% ( 7) 00:12:20.253 10527.871 - 10580.511: 96.9033% ( 7) 00:12:20.253 10580.511 - 10633.150: 96.9457% ( 6) 00:12:20.253 10633.150 - 10685.790: 96.9881% ( 6) 00:12:20.253 10685.790 - 10738.429: 97.0376% ( 7) 00:12:20.253 10738.429 - 10791.068: 97.0800% ( 6) 00:12:20.253 10791.068 - 10843.708: 97.1225% ( 6) 00:12:20.253 10843.708 - 10896.347: 97.1507% ( 4) 00:12:20.253 10896.347 - 10948.986: 97.1790% ( 4) 00:12:20.253 10948.986 - 11001.626: 97.2214% ( 6) 00:12:20.254 
11001.626 - 11054.265: 97.2426% ( 3) 00:12:20.254 11054.265 - 11106.904: 97.2497% ( 1) 00:12:20.254 11106.904 - 11159.544: 97.2709% ( 3) 00:12:20.254 11159.544 - 11212.183: 97.2851% ( 2) 00:12:20.254 11580.659 - 11633.298: 97.2992% ( 2) 00:12:20.254 11633.298 - 11685.937: 97.3275% ( 4) 00:12:20.254 11685.937 - 11738.577: 97.3558% ( 4) 00:12:20.254 11738.577 - 11791.216: 97.3699% ( 2) 00:12:20.254 11791.216 - 11843.855: 97.3982% ( 4) 00:12:20.254 11843.855 - 11896.495: 97.4194% ( 3) 00:12:20.254 11896.495 - 11949.134: 97.4548% ( 5) 00:12:20.254 11949.134 - 12001.773: 97.4830% ( 4) 00:12:20.254 12001.773 - 12054.413: 97.5042% ( 3) 00:12:20.254 12054.413 - 12107.052: 97.5396% ( 5) 00:12:20.254 12107.052 - 12159.692: 97.5749% ( 5) 00:12:20.254 12159.692 - 12212.331: 97.5962% ( 3) 00:12:20.254 12212.331 - 12264.970: 97.6386% ( 6) 00:12:20.254 12264.970 - 12317.610: 97.6669% ( 4) 00:12:20.254 12317.610 - 12370.249: 97.7022% ( 5) 00:12:20.254 12370.249 - 12422.888: 97.7234% ( 3) 00:12:20.254 12422.888 - 12475.528: 97.7446% ( 3) 00:12:20.254 12475.528 - 12528.167: 97.7729% ( 4) 00:12:20.254 12528.167 - 12580.806: 97.8083% ( 5) 00:12:20.254 12580.806 - 12633.446: 97.8295% ( 3) 00:12:20.254 12633.446 - 12686.085: 97.8577% ( 4) 00:12:20.254 12686.085 - 12738.724: 97.8790% ( 3) 00:12:20.254 12738.724 - 12791.364: 97.9143% ( 5) 00:12:20.254 12791.364 - 12844.003: 97.9355% ( 3) 00:12:20.254 12844.003 - 12896.643: 97.9567% ( 3) 00:12:20.254 12896.643 - 12949.282: 97.9921% ( 5) 00:12:20.254 12949.282 - 13001.921: 98.0062% ( 2) 00:12:20.254 13001.921 - 13054.561: 98.0274% ( 3) 00:12:20.254 13054.561 - 13107.200: 98.0486% ( 3) 00:12:20.254 13107.200 - 13159.839: 98.0769% ( 4) 00:12:20.254 13159.839 - 13212.479: 98.0981% ( 3) 00:12:20.254 13212.479 - 13265.118: 98.1193% ( 3) 00:12:20.254 13265.118 - 13317.757: 98.1618% ( 6) 00:12:20.254 13317.757 - 13370.397: 98.1830% ( 3) 00:12:20.254 13370.397 - 13423.036: 98.2042% ( 3) 00:12:20.254 13423.036 - 13475.676: 98.2325% ( 4) 00:12:20.254 13475.676 - 13580.954: 98.2820% ( 7) 00:12:20.254 13580.954 - 13686.233: 98.3244% ( 6) 00:12:20.254 13686.233 - 13791.512: 98.3456% ( 3) 00:12:20.254 13791.512 - 13896.790: 98.3668% ( 3) 00:12:20.254 13896.790 - 14002.069: 98.3880% ( 3) 00:12:20.254 14002.069 - 14107.348: 98.4163% ( 4) 00:12:20.254 14107.348 - 14212.627: 98.4446% ( 4) 00:12:20.254 14212.627 - 14317.905: 98.4799% ( 5) 00:12:20.254 14317.905 - 14423.184: 98.5365% ( 8) 00:12:20.254 14423.184 - 14528.463: 98.5860% ( 7) 00:12:20.254 14528.463 - 14633.741: 98.6355% ( 7) 00:12:20.254 14633.741 - 14739.020: 98.7062% ( 10) 00:12:20.254 14739.020 - 14844.299: 98.7557% ( 7) 00:12:20.254 14844.299 - 14949.578: 98.7910% ( 5) 00:12:20.254 14949.578 - 15054.856: 98.8122% ( 3) 00:12:20.254 15054.856 - 15160.135: 98.8334% ( 3) 00:12:20.254 15160.135 - 15265.414: 98.8546% ( 3) 00:12:20.254 15265.414 - 15370.692: 98.8758% ( 3) 00:12:20.254 15370.692 - 15475.971: 98.9041% ( 4) 00:12:20.254 15475.971 - 15581.250: 98.9253% ( 3) 00:12:20.254 15581.250 - 15686.529: 98.9536% ( 4) 00:12:20.254 15686.529 - 15791.807: 98.9748% ( 3) 00:12:20.254 15791.807 - 15897.086: 98.9960% ( 3) 00:12:20.254 15897.086 - 16002.365: 99.0173% ( 3) 00:12:20.254 16002.365 - 16107.643: 99.0385% ( 3) 00:12:20.254 16107.643 - 16212.922: 99.0667% ( 4) 00:12:20.254 16212.922 - 16318.201: 99.0880% ( 3) 00:12:20.254 16318.201 - 16423.480: 99.0950% ( 1) 00:12:20.254 33478.631 - 33689.189: 99.1021% ( 1) 00:12:20.254 33689.189 - 33899.746: 99.1445% ( 6) 00:12:20.254 33899.746 - 34110.304: 99.1869% ( 6) 00:12:20.254 
34110.304 - 34320.861: 99.2364% ( 7) 00:12:20.254 34320.861 - 34531.418: 99.2859% ( 7) 00:12:20.254 34531.418 - 34741.976: 99.3283% ( 6) 00:12:20.254 34741.976 - 34952.533: 99.3778% ( 7) 00:12:20.254 34952.533 - 35163.091: 99.4202% ( 6) 00:12:20.254 35163.091 - 35373.648: 99.4697% ( 7) 00:12:20.254 35373.648 - 35584.206: 99.5192% ( 7) 00:12:20.254 35584.206 - 35794.763: 99.5475% ( 4) 00:12:20.254 40005.912 - 40216.469: 99.5617% ( 2) 00:12:20.254 40216.469 - 40427.027: 99.6041% ( 6) 00:12:20.254 40427.027 - 40637.584: 99.6324% ( 4) 00:12:20.254 40637.584 - 40848.141: 99.6677% ( 5) 00:12:20.254 40848.141 - 41058.699: 99.7172% ( 7) 00:12:20.254 41058.699 - 41269.256: 99.7667% ( 7) 00:12:20.254 41269.256 - 41479.814: 99.8091% ( 6) 00:12:20.254 41479.814 - 41690.371: 99.8586% ( 7) 00:12:20.254 41690.371 - 41900.929: 99.9010% ( 6) 00:12:20.254 41900.929 - 42111.486: 99.9505% ( 7) 00:12:20.254 42111.486 - 42322.043: 100.0000% ( 7) 00:12:20.254 00:12:20.254 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:20.254 ============================================================================== 00:12:20.254 Range in us Cumulative IO count 00:12:20.254 7211.592 - 7264.231: 0.0141% ( 2) 00:12:20.254 7264.231 - 7316.871: 0.0282% ( 2) 00:12:20.254 7316.871 - 7369.510: 0.1056% ( 11) 00:12:20.254 7369.510 - 7422.149: 0.2604% ( 22) 00:12:20.254 7422.149 - 7474.789: 0.4012% ( 20) 00:12:20.254 7474.789 - 7527.428: 0.7461% ( 49) 00:12:20.254 7527.428 - 7580.067: 1.0909% ( 49) 00:12:20.254 7580.067 - 7632.707: 1.4780% ( 55) 00:12:20.254 7632.707 - 7685.346: 1.9003% ( 60) 00:12:20.254 7685.346 - 7737.986: 2.3367% ( 62) 00:12:20.254 7737.986 - 7790.625: 2.8646% ( 75) 00:12:20.254 7790.625 - 7843.264: 3.3643% ( 71) 00:12:20.254 7843.264 - 7895.904: 3.9274% ( 80) 00:12:20.254 7895.904 - 7948.543: 4.6593% ( 104) 00:12:20.254 7948.543 - 8001.182: 5.6025% ( 134) 00:12:20.254 8001.182 - 8053.822: 6.9257% ( 188) 00:12:20.254 8053.822 - 8106.461: 8.4882% ( 222) 00:12:20.254 8106.461 - 8159.100: 10.5715% ( 296) 00:12:20.254 8159.100 - 8211.740: 13.7950% ( 458) 00:12:20.254 8211.740 - 8264.379: 17.3916% ( 511) 00:12:20.254 8264.379 - 8317.018: 21.1993% ( 541) 00:12:20.254 8317.018 - 8369.658: 25.4786% ( 608) 00:12:20.254 8369.658 - 8422.297: 29.5608% ( 580) 00:12:20.254 8422.297 - 8474.937: 33.8119% ( 604) 00:12:20.254 8474.937 - 8527.576: 38.0842% ( 607) 00:12:20.254 8527.576 - 8580.215: 42.7013% ( 656) 00:12:20.254 8580.215 - 8632.855: 47.1636% ( 634) 00:12:20.254 8632.855 - 8685.494: 51.7877% ( 657) 00:12:20.254 8685.494 - 8738.133: 56.3908% ( 654) 00:12:20.254 8738.133 - 8790.773: 60.9164% ( 643) 00:12:20.254 8790.773 - 8843.412: 65.2801% ( 620) 00:12:20.254 8843.412 - 8896.051: 69.6509% ( 621) 00:12:20.254 8896.051 - 8948.691: 74.0428% ( 624) 00:12:20.254 8948.691 - 9001.330: 78.0476% ( 569) 00:12:20.254 9001.330 - 9053.969: 81.7638% ( 528) 00:12:20.254 9053.969 - 9106.609: 84.8255% ( 435) 00:12:20.254 9106.609 - 9159.248: 87.1833% ( 335) 00:12:20.254 9159.248 - 9211.888: 89.0766% ( 269) 00:12:20.254 9211.888 - 9264.527: 90.5124% ( 204) 00:12:20.254 9264.527 - 9317.166: 91.6033% ( 155) 00:12:20.254 9317.166 - 9369.806: 92.5394% ( 133) 00:12:20.254 9369.806 - 9422.445: 93.2995% ( 108) 00:12:20.254 9422.445 - 9475.084: 93.9471% ( 92) 00:12:20.254 9475.084 - 9527.724: 94.4257% ( 68) 00:12:20.254 9527.724 - 9580.363: 94.8198% ( 56) 00:12:20.254 9580.363 - 9633.002: 95.1506% ( 47) 00:12:20.254 9633.002 - 9685.642: 95.3970% ( 35) 00:12:20.254 9685.642 - 9738.281: 95.5800% ( 26) 00:12:20.254 9738.281 - 
9790.920: 95.7066% ( 18) 00:12:20.254 9790.920 - 9843.560: 95.8052% ( 14) 00:12:20.254 9843.560 - 9896.199: 95.9178% ( 16) 00:12:20.254 9896.199 - 9948.839: 95.9952% ( 11) 00:12:20.254 9948.839 - 10001.478: 96.0586% ( 9) 00:12:20.254 10001.478 - 10054.117: 96.1219% ( 9) 00:12:20.254 10054.117 - 10106.757: 96.1712% ( 7) 00:12:20.254 10106.757 - 10159.396: 96.2204% ( 7) 00:12:20.254 10159.396 - 10212.035: 96.3049% ( 12) 00:12:20.254 10212.035 - 10264.675: 96.3682% ( 9) 00:12:20.254 10264.675 - 10317.314: 96.4245% ( 8) 00:12:20.254 10317.314 - 10369.953: 96.4668% ( 6) 00:12:20.254 10369.953 - 10422.593: 96.5020% ( 5) 00:12:20.254 10422.593 - 10475.232: 96.5512% ( 7) 00:12:20.254 10475.232 - 10527.871: 96.6005% ( 7) 00:12:20.254 10527.871 - 10580.511: 96.6427% ( 6) 00:12:20.254 10580.511 - 10633.150: 96.7061% ( 9) 00:12:20.254 10633.150 - 10685.790: 96.7413% ( 5) 00:12:20.254 10685.790 - 10738.429: 96.7905% ( 7) 00:12:20.254 10738.429 - 10791.068: 96.8257% ( 5) 00:12:20.255 10791.068 - 10843.708: 96.8609% ( 5) 00:12:20.255 10843.708 - 10896.347: 96.8891% ( 4) 00:12:20.255 10896.347 - 10948.986: 96.9172% ( 4) 00:12:20.255 10948.986 - 11001.626: 96.9524% ( 5) 00:12:20.255 11001.626 - 11054.265: 96.9806% ( 4) 00:12:20.255 11054.265 - 11106.904: 97.0228% ( 6) 00:12:20.255 11106.904 - 11159.544: 97.0650% ( 6) 00:12:20.255 11159.544 - 11212.183: 97.1002% ( 5) 00:12:20.255 11212.183 - 11264.822: 97.1425% ( 6) 00:12:20.255 11264.822 - 11317.462: 97.1847% ( 6) 00:12:20.255 11317.462 - 11370.101: 97.2340% ( 7) 00:12:20.255 11370.101 - 11422.741: 97.2621% ( 4) 00:12:20.255 11422.741 - 11475.380: 97.3184% ( 8) 00:12:20.255 11475.380 - 11528.019: 97.3606% ( 6) 00:12:20.255 11528.019 - 11580.659: 97.3958% ( 5) 00:12:20.255 11580.659 - 11633.298: 97.4169% ( 3) 00:12:20.255 11633.298 - 11685.937: 97.4451% ( 4) 00:12:20.255 11685.937 - 11738.577: 97.4662% ( 3) 00:12:20.255 11738.577 - 11791.216: 97.4803% ( 2) 00:12:20.255 11791.216 - 11843.855: 97.4944% ( 2) 00:12:20.255 11843.855 - 11896.495: 97.5084% ( 2) 00:12:20.255 11896.495 - 11949.134: 97.5155% ( 1) 00:12:20.255 11949.134 - 12001.773: 97.5296% ( 2) 00:12:20.255 12001.773 - 12054.413: 97.5366% ( 1) 00:12:20.255 12054.413 - 12107.052: 97.5507% ( 2) 00:12:20.255 12107.052 - 12159.692: 97.5718% ( 3) 00:12:20.255 12159.692 - 12212.331: 97.5999% ( 4) 00:12:20.255 12212.331 - 12264.970: 97.6281% ( 4) 00:12:20.255 12264.970 - 12317.610: 97.6562% ( 4) 00:12:20.255 12317.610 - 12370.249: 97.6844% ( 4) 00:12:20.255 12370.249 - 12422.888: 97.7126% ( 4) 00:12:20.255 12422.888 - 12475.528: 97.7477% ( 5) 00:12:20.255 12475.528 - 12528.167: 97.7829% ( 5) 00:12:20.255 12528.167 - 12580.806: 97.8252% ( 6) 00:12:20.255 12580.806 - 12633.446: 97.8533% ( 4) 00:12:20.255 12633.446 - 12686.085: 97.8744% ( 3) 00:12:20.255 12686.085 - 12738.724: 97.9026% ( 4) 00:12:20.255 12738.724 - 12791.364: 97.9307% ( 4) 00:12:20.255 12791.364 - 12844.003: 97.9589% ( 4) 00:12:20.255 12844.003 - 12896.643: 97.9870% ( 4) 00:12:20.255 12896.643 - 12949.282: 98.0082% ( 3) 00:12:20.255 12949.282 - 13001.921: 98.0293% ( 3) 00:12:20.255 13001.921 - 13054.561: 98.0434% ( 2) 00:12:20.255 13054.561 - 13107.200: 98.0574% ( 2) 00:12:20.255 13107.200 - 13159.839: 98.0715% ( 2) 00:12:20.255 13159.839 - 13212.479: 98.0926% ( 3) 00:12:20.255 13212.479 - 13265.118: 98.1067% ( 2) 00:12:20.255 13265.118 - 13317.757: 98.1208% ( 2) 00:12:20.255 13317.757 - 13370.397: 98.1349% ( 2) 00:12:20.255 13370.397 - 13423.036: 98.1489% ( 2) 00:12:20.255 13423.036 - 13475.676: 98.1630% ( 2) 00:12:20.255 13475.676 - 
13580.954: 98.1982% ( 5) 00:12:20.255 13791.512 - 13896.790: 98.2193% ( 3) 00:12:20.255 13896.790 - 14002.069: 98.2686% ( 7) 00:12:20.255 14002.069 - 14107.348: 98.3178% ( 7) 00:12:20.255 14107.348 - 14212.627: 98.3601% ( 6) 00:12:20.255 14212.627 - 14317.905: 98.4164% ( 8) 00:12:20.255 14317.905 - 14423.184: 98.4727% ( 8) 00:12:20.255 14423.184 - 14528.463: 98.5220% ( 7) 00:12:20.255 14528.463 - 14633.741: 98.5712% ( 7) 00:12:20.255 14633.741 - 14739.020: 98.6275% ( 8) 00:12:20.255 14739.020 - 14844.299: 98.6627% ( 5) 00:12:20.255 14844.299 - 14949.578: 98.7261% ( 9) 00:12:20.255 14949.578 - 15054.856: 98.7683% ( 6) 00:12:20.255 15054.856 - 15160.135: 98.8176% ( 7) 00:12:20.255 15160.135 - 15265.414: 98.8668% ( 7) 00:12:20.255 15265.414 - 15370.692: 98.9091% ( 6) 00:12:20.255 15370.692 - 15475.971: 98.9513% ( 6) 00:12:20.255 15475.971 - 15581.250: 99.0076% ( 8) 00:12:20.255 15581.250 - 15686.529: 99.0498% ( 6) 00:12:20.255 15686.529 - 15791.807: 99.0709% ( 3) 00:12:20.255 15791.807 - 15897.086: 99.0991% ( 4) 00:12:20.255 26424.957 - 26530.236: 99.1061% ( 1) 00:12:20.255 26530.236 - 26635.515: 99.1202% ( 2) 00:12:20.255 26635.515 - 26740.794: 99.1484% ( 4) 00:12:20.255 26740.794 - 26846.072: 99.1765% ( 4) 00:12:20.255 26846.072 - 26951.351: 99.1976% ( 3) 00:12:20.255 26951.351 - 27161.908: 99.2399% ( 6) 00:12:20.255 27161.908 - 27372.466: 99.2891% ( 7) 00:12:20.255 27372.466 - 27583.023: 99.3384% ( 7) 00:12:20.255 27583.023 - 27793.581: 99.3877% ( 7) 00:12:20.255 27793.581 - 28004.138: 99.4299% ( 6) 00:12:20.255 28004.138 - 28214.696: 99.4862% ( 8) 00:12:20.255 28214.696 - 28425.253: 99.5355% ( 7) 00:12:20.255 28425.253 - 28635.810: 99.5495% ( 2) 00:12:20.255 32846.959 - 33057.516: 99.5777% ( 4) 00:12:20.255 33057.516 - 33268.074: 99.6270% ( 7) 00:12:20.255 33268.074 - 33478.631: 99.6762% ( 7) 00:12:20.255 33478.631 - 33689.189: 99.7255% ( 7) 00:12:20.255 33689.189 - 33899.746: 99.7748% ( 7) 00:12:20.255 33899.746 - 34110.304: 99.8240% ( 7) 00:12:20.255 34110.304 - 34320.861: 99.8733% ( 7) 00:12:20.255 34320.861 - 34531.418: 99.9155% ( 6) 00:12:20.255 34531.418 - 34741.976: 99.9648% ( 7) 00:12:20.255 34741.976 - 34952.533: 100.0000% ( 5) 00:12:20.255 00:12:20.255 20:30:28 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:21.633 Initializing NVMe Controllers 00:12:21.633 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:21.633 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:21.633 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:21.633 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:21.633 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:21.633 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:21.633 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:21.633 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:21.633 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:21.633 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:21.633 Initialization complete. Launching workers. 
00:12:21.633 ========================================================
00:12:21.633 Latency(us)
00:12:21.633 Device Information : IOPS MiB/s Average min max
00:12:21.633 PCIE (0000:00:10.0) NSID 1 from core 0: 10161.10 119.08 12629.57 8047.75 46610.95
00:12:21.633 PCIE (0000:00:11.0) NSID 1 from core 0: 10161.10 119.08 12606.65 8352.36 44924.92
00:12:21.633 PCIE (0000:00:13.0) NSID 1 from core 0: 10161.10 119.08 12583.50 8339.93 43850.08
00:12:21.633 PCIE (0000:00:12.0) NSID 1 from core 0: 10161.10 119.08 12560.87 8328.25 42326.39
00:12:21.633 PCIE (0000:00:12.0) NSID 2 from core 0: 10161.10 119.08 12538.07 8228.34 40728.29
00:12:21.633 PCIE (0000:00:12.0) NSID 3 from core 0: 10225.01 119.82 12436.64 8213.20 29929.95
00:12:21.633 ========================================================
00:12:21.633 Total : 61030.53 715.20 12559.09 8047.75 46610.95
00:12:21.633 
00:12:21.633 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:12:21.633 =================================================================================
00:12:21.633 1.00000% : 8843.412us
00:12:21.633 10.00000% : 9580.363us
00:12:21.633 25.00000% : 9896.199us
00:12:21.633 50.00000% : 10843.708us
00:12:21.633 75.00000% : 15054.856us
00:12:21.633 90.00000% : 17476.267us
00:12:21.633 95.00000% : 18529.054us
00:12:21.633 98.00000% : 19581.841us
00:12:21.633 99.00000% : 35163.091us
00:12:21.633 99.50000% : 44217.060us
00:12:21.633 99.90000% : 46112.077us
00:12:21.633 99.99000% : 46533.192us
00:12:21.633 99.99900% : 46743.749us
00:12:21.633 99.99990% : 46743.749us
00:12:21.633 99.99999% : 46743.749us
00:12:21.633 
00:12:21.633 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:12:21.633 =================================================================================
00:12:21.633 1.00000% : 8790.773us
00:12:21.633 10.00000% : 9580.363us
00:12:21.633 25.00000% : 9896.199us
00:12:21.633 50.00000% : 10738.429us
00:12:21.633 75.00000% : 14844.299us
00:12:21.633 90.00000% : 17581.545us
00:12:21.633 95.00000% : 18634.333us
00:12:21.633 98.00000% : 19581.841us
00:12:21.633 99.00000% : 33268.074us
00:12:21.633 99.50000% : 42743.158us
00:12:21.633 99.90000% : 44638.175us
00:12:21.633 99.99000% : 45059.290us
00:12:21.633 99.99900% : 45059.290us
00:12:21.633 99.99990% : 45059.290us
00:12:21.633 99.99999% : 45059.290us
00:12:21.633 
00:12:21.633 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:12:21.633 =================================================================================
00:12:21.633 1.00000% : 8843.412us
00:12:21.633 10.00000% : 9580.363us
00:12:21.633 25.00000% : 9948.839us
00:12:21.633 50.00000% : 10685.790us
00:12:21.633 75.00000% : 14844.299us
00:12:21.633 90.00000% : 17476.267us
00:12:21.633 95.00000% : 18213.218us
00:12:21.633 98.00000% : 19792.398us
00:12:21.633 99.00000% : 32425.844us
00:12:21.633 99.50000% : 42322.043us
00:12:21.633 99.90000% : 43585.388us
00:12:21.633 99.99000% : 44006.503us
00:12:21.633 99.99900% : 44006.503us
00:12:21.633 99.99990% : 44006.503us
00:12:21.633 99.99999% : 44006.503us
00:12:21.633 
00:12:21.633 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:12:21.633 =================================================================================
00:12:21.633 1.00000% : 8685.494us
00:12:21.633 10.00000% : 9580.363us
00:12:21.633 25.00000% : 9948.839us
00:12:21.633 50.00000% : 10791.068us
00:12:21.633 75.00000% : 14633.741us
00:12:21.633 90.00000% : 17370.988us
00:12:21.633 95.00000% : 18002.660us
00:12:21.633 98.00000% : 19581.841
00:12:21.633 99.00000% : 30320.270us 00:12:21.633 99.50000% : 40848.141us 00:12:21.633 99.90000% : 42111.486us 00:12:21.633 99.99000% : 42322.043us 00:12:21.633 99.99900% : 42532.601us 00:12:21.633 99.99990% : 42532.601us 00:12:21.633 99.99999% : 42532.601us 00:12:21.633 00:12:21.633 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:21.633 ================================================================================= 00:12:21.633 1.00000% : 8896.051us 00:12:21.633 10.00000% : 9633.002us 00:12:21.633 25.00000% : 9948.839us 00:12:21.633 50.00000% : 10791.068us 00:12:21.633 75.00000% : 14844.299us 00:12:21.633 90.00000% : 17265.709us 00:12:21.633 95.00000% : 18107.939us 00:12:21.633 98.00000% : 19476.562us 00:12:21.633 99.00000% : 28425.253us 00:12:21.633 99.50000% : 39374.239us 00:12:21.633 99.90000% : 40427.027us 00:12:21.633 99.99000% : 40848.141us 00:12:21.633 99.99900% : 40848.141us 00:12:21.633 99.99990% : 40848.141us 00:12:21.633 99.99999% : 40848.141us 00:12:21.633 00:12:21.633 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:21.633 ================================================================================= 00:12:21.633 1.00000% : 8948.691us 00:12:21.633 10.00000% : 9580.363us 00:12:21.633 25.00000% : 9896.199us 00:12:21.633 50.00000% : 10843.708us 00:12:21.633 75.00000% : 14949.578us 00:12:21.633 90.00000% : 17370.988us 00:12:21.633 95.00000% : 18213.218us 00:12:21.633 98.00000% : 19266.005us 00:12:21.633 99.00000% : 20002.956us 00:12:21.633 99.50000% : 27793.581us 00:12:21.633 99.90000% : 29478.040us 00:12:21.633 99.99000% : 29899.155us 00:12:21.633 99.99900% : 30109.712us 00:12:21.633 99.99990% : 30109.712us 00:12:21.633 99.99999% : 30109.712us 00:12:21.633 00:12:21.633 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:21.633 ============================================================================== 00:12:21.633 Range in us Cumulative IO count 00:12:21.633 8001.182 - 8053.822: 0.0098% ( 1) 00:12:21.634 8053.822 - 8106.461: 0.0393% ( 3) 00:12:21.634 8106.461 - 8159.100: 0.0688% ( 3) 00:12:21.634 8159.100 - 8211.740: 0.1081% ( 4) 00:12:21.634 8211.740 - 8264.379: 0.1376% ( 3) 00:12:21.634 8264.379 - 8317.018: 0.1671% ( 3) 00:12:21.634 8317.018 - 8369.658: 0.1965% ( 3) 00:12:21.634 8369.658 - 8422.297: 0.2260% ( 3) 00:12:21.634 8422.297 - 8474.937: 0.2752% ( 5) 00:12:21.634 8474.937 - 8527.576: 0.3439% ( 7) 00:12:21.634 8527.576 - 8580.215: 0.4914% ( 15) 00:12:21.634 8580.215 - 8632.855: 0.5601% ( 7) 00:12:21.634 8632.855 - 8685.494: 0.5798% ( 2) 00:12:21.634 8685.494 - 8738.133: 0.6486% ( 7) 00:12:21.634 8738.133 - 8790.773: 0.8156% ( 17) 00:12:21.634 8790.773 - 8843.412: 1.0515% ( 24) 00:12:21.634 8843.412 - 8896.051: 1.1792% ( 13) 00:12:21.634 8896.051 - 8948.691: 1.2775% ( 10) 00:12:21.634 8948.691 - 9001.330: 1.5428% ( 27) 00:12:21.634 9001.330 - 9053.969: 1.9064% ( 37) 00:12:21.634 9053.969 - 9106.609: 2.2799% ( 38) 00:12:21.634 9106.609 - 9159.248: 2.7319% ( 46) 00:12:21.634 9159.248 - 9211.888: 3.4591% ( 74) 00:12:21.634 9211.888 - 9264.527: 4.2748% ( 83) 00:12:21.634 9264.527 - 9317.166: 5.1789% ( 92) 00:12:21.634 9317.166 - 9369.806: 5.9552% ( 79) 00:12:21.634 9369.806 - 9422.445: 6.9379% ( 100) 00:12:21.634 9422.445 - 9475.084: 8.2351% ( 132) 00:12:21.634 9475.084 - 9527.724: 9.7189% ( 151) 00:12:21.634 9527.724 - 9580.363: 11.6844% ( 200) 00:12:21.634 9580.363 - 9633.002: 14.5637% ( 293) 00:12:21.634 9633.002 - 9685.642: 16.8042% ( 228) 00:12:21.634 9685.642 - 9738.281: 19.1627% ( 240) 00:12:21.634 
9738.281 - 9790.920: 21.4426% ( 232) 00:12:21.634 9790.920 - 9843.560: 23.7028% ( 230) 00:12:21.634 9843.560 - 9896.199: 25.9434% ( 228) 00:12:21.634 9896.199 - 9948.839: 28.3215% ( 242) 00:12:21.634 9948.839 - 10001.478: 30.8274% ( 255) 00:12:21.634 10001.478 - 10054.117: 33.3137% ( 253) 00:12:21.634 10054.117 - 10106.757: 35.1710% ( 189) 00:12:21.634 10106.757 - 10159.396: 36.7040% ( 156) 00:12:21.634 10159.396 - 10212.035: 38.0798% ( 140) 00:12:21.634 10212.035 - 10264.675: 39.4654% ( 141) 00:12:21.634 10264.675 - 10317.314: 40.9296% ( 149) 00:12:21.634 10317.314 - 10369.953: 42.4332% ( 153) 00:12:21.634 10369.953 - 10422.593: 43.8188% ( 141) 00:12:21.634 10422.593 - 10475.232: 45.1946% ( 140) 00:12:21.634 10475.232 - 10527.871: 46.2559% ( 108) 00:12:21.634 10527.871 - 10580.511: 47.1600% ( 92) 00:12:21.634 10580.511 - 10633.150: 47.7693% ( 62) 00:12:21.634 10633.150 - 10685.790: 48.5358% ( 78) 00:12:21.634 10685.790 - 10738.429: 49.2138% ( 69) 00:12:21.634 10738.429 - 10791.068: 49.9410% ( 74) 00:12:21.634 10791.068 - 10843.708: 50.6977% ( 77) 00:12:21.634 10843.708 - 10896.347: 51.4839% ( 80) 00:12:21.634 10896.347 - 10948.986: 52.1914% ( 72) 00:12:21.634 10948.986 - 11001.626: 52.7614% ( 58) 00:12:21.634 11001.626 - 11054.265: 53.3412% ( 59) 00:12:21.634 11054.265 - 11106.904: 53.7539% ( 42) 00:12:21.634 11106.904 - 11159.544: 54.1667% ( 42) 00:12:21.634 11159.544 - 11212.183: 54.6384% ( 48) 00:12:21.634 11212.183 - 11264.822: 55.3263% ( 70) 00:12:21.634 11264.822 - 11317.462: 55.9650% ( 65) 00:12:21.634 11317.462 - 11370.101: 56.4957% ( 54) 00:12:21.634 11370.101 - 11422.741: 56.8888% ( 40) 00:12:21.634 11422.741 - 11475.380: 57.1443% ( 26) 00:12:21.634 11475.380 - 11528.019: 57.3408% ( 20) 00:12:21.634 11528.019 - 11580.659: 57.5767% ( 24) 00:12:21.634 11580.659 - 11633.298: 57.7437% ( 17) 00:12:21.634 11633.298 - 11685.937: 58.0090% ( 27) 00:12:21.634 11685.937 - 11738.577: 58.4611% ( 46) 00:12:21.634 11738.577 - 11791.216: 58.9230% ( 47) 00:12:21.634 11791.216 - 11843.855: 59.1490% ( 23) 00:12:21.634 11843.855 - 11896.495: 59.2669% ( 12) 00:12:21.634 11896.495 - 11949.134: 59.4340% ( 17) 00:12:21.634 11949.134 - 12001.773: 59.5912% ( 16) 00:12:21.634 12001.773 - 12054.413: 59.8565% ( 27) 00:12:21.634 12054.413 - 12107.052: 60.0432% ( 19) 00:12:21.634 12107.052 - 12159.692: 60.2103% ( 17) 00:12:21.634 12159.692 - 12212.331: 60.3184% ( 11) 00:12:21.634 12212.331 - 12264.970: 60.5936% ( 28) 00:12:21.634 12264.970 - 12317.610: 60.8294% ( 24) 00:12:21.634 12317.610 - 12370.249: 61.0751% ( 25) 00:12:21.634 12370.249 - 12422.888: 61.3404% ( 27) 00:12:21.634 12422.888 - 12475.528: 61.7433% ( 41) 00:12:21.634 12475.528 - 12528.167: 62.2248% ( 49) 00:12:21.634 12528.167 - 12580.806: 62.5000% ( 28) 00:12:21.634 12580.806 - 12633.446: 62.8636% ( 37) 00:12:21.634 12633.446 - 12686.085: 63.2665% ( 41) 00:12:21.634 12686.085 - 12738.724: 63.7186% ( 46) 00:12:21.634 12738.724 - 12791.364: 63.9347% ( 22) 00:12:21.634 12791.364 - 12844.003: 64.1411% ( 21) 00:12:21.634 12844.003 - 12896.643: 64.4261% ( 29) 00:12:21.634 12896.643 - 12949.282: 64.6226% ( 20) 00:12:21.634 12949.282 - 13001.921: 64.9371% ( 32) 00:12:21.634 13001.921 - 13054.561: 65.2417% ( 31) 00:12:21.634 13054.561 - 13107.200: 65.3990% ( 16) 00:12:21.634 13107.200 - 13159.839: 65.6053% ( 21) 00:12:21.634 13159.839 - 13212.479: 65.8019% ( 20) 00:12:21.634 13212.479 - 13265.118: 66.0672% ( 27) 00:12:21.634 13265.118 - 13317.757: 66.4013% ( 34) 00:12:21.634 13317.757 - 13370.397: 66.8141% ( 42) 00:12:21.634 13370.397 - 13423.036: 
67.0696% ( 26) 00:12:21.634 13423.036 - 13475.676: 67.4921% ( 43) 00:12:21.634 13475.676 - 13580.954: 68.2390% ( 76) 00:12:21.634 13580.954 - 13686.233: 68.8876% ( 66) 00:12:21.634 13686.233 - 13791.512: 69.3593% ( 48) 00:12:21.634 13791.512 - 13896.790: 69.8998% ( 55) 00:12:21.634 13896.790 - 14002.069: 70.3616% ( 47) 00:12:21.634 14002.069 - 14107.348: 70.9021% ( 55) 00:12:21.634 14107.348 - 14212.627: 71.2756% ( 38) 00:12:21.634 14212.627 - 14317.905: 71.5311% ( 26) 00:12:21.634 14317.905 - 14423.184: 72.0028% ( 48) 00:12:21.634 14423.184 - 14528.463: 72.4744% ( 48) 00:12:21.634 14528.463 - 14633.741: 72.8774% ( 41) 00:12:21.634 14633.741 - 14739.020: 73.4572% ( 59) 00:12:21.634 14739.020 - 14844.299: 74.2728% ( 83) 00:12:21.634 14844.299 - 14949.578: 74.9410% ( 68) 00:12:21.634 14949.578 - 15054.856: 75.6977% ( 77) 00:12:21.634 15054.856 - 15160.135: 76.2775% ( 59) 00:12:21.634 15160.135 - 15265.414: 76.9359% ( 67) 00:12:21.634 15265.414 - 15370.692: 77.6140% ( 69) 00:12:21.634 15370.692 - 15475.971: 78.3412% ( 74) 00:12:21.634 15475.971 - 15581.250: 79.0193% ( 69) 00:12:21.634 15581.250 - 15686.529: 79.7858% ( 78) 00:12:21.634 15686.529 - 15791.807: 80.3164% ( 54) 00:12:21.634 15791.807 - 15897.086: 81.0633% ( 76) 00:12:21.634 15897.086 - 16002.365: 81.8593% ( 81) 00:12:21.634 16002.365 - 16107.643: 82.6553% ( 81) 00:12:21.634 16107.643 - 16212.922: 83.3923% ( 75) 00:12:21.634 16212.922 - 16318.201: 84.0704% ( 69) 00:12:21.634 16318.201 - 16423.480: 84.6207% ( 56) 00:12:21.634 16423.480 - 16528.758: 85.3970% ( 79) 00:12:21.634 16528.758 - 16634.037: 85.8687% ( 48) 00:12:21.634 16634.037 - 16739.316: 86.3797% ( 52) 00:12:21.634 16739.316 - 16844.594: 86.8907% ( 52) 00:12:21.634 16844.594 - 16949.873: 87.3231% ( 44) 00:12:21.634 16949.873 - 17055.152: 87.7457% ( 43) 00:12:21.634 17055.152 - 17160.431: 88.3255% ( 59) 00:12:21.634 17160.431 - 17265.709: 89.2983% ( 99) 00:12:21.634 17265.709 - 17370.988: 89.8585% ( 57) 00:12:21.634 17370.988 - 17476.267: 90.3597% ( 51) 00:12:21.634 17476.267 - 17581.545: 90.9788% ( 63) 00:12:21.634 17581.545 - 17686.824: 91.5487% ( 58) 00:12:21.634 17686.824 - 17792.103: 91.9713% ( 43) 00:12:21.634 17792.103 - 17897.382: 92.4233% ( 46) 00:12:21.634 17897.382 - 18002.660: 92.8164% ( 40) 00:12:21.634 18002.660 - 18107.939: 93.3864% ( 58) 00:12:21.634 18107.939 - 18213.218: 93.9662% ( 59) 00:12:21.634 18213.218 - 18318.496: 94.4870% ( 53) 00:12:21.634 18318.496 - 18423.775: 94.9980% ( 52) 00:12:21.634 18423.775 - 18529.054: 95.3715% ( 38) 00:12:21.634 18529.054 - 18634.333: 95.6663% ( 30) 00:12:21.634 18634.333 - 18739.611: 95.9611% ( 30) 00:12:21.634 18739.611 - 18844.890: 96.3542% ( 40) 00:12:21.634 18844.890 - 18950.169: 96.7276% ( 38) 00:12:21.634 18950.169 - 19055.447: 97.0617% ( 34) 00:12:21.634 19055.447 - 19160.726: 97.2681% ( 21) 00:12:21.634 19160.726 - 19266.005: 97.5727% ( 31) 00:12:21.634 19266.005 - 19371.284: 97.7103% ( 14) 00:12:21.634 19371.284 - 19476.562: 97.8774% ( 17) 00:12:21.634 19476.562 - 19581.841: 98.0837% ( 21) 00:12:21.634 19581.841 - 19687.120: 98.2606% ( 18) 00:12:21.634 19687.120 - 19792.398: 98.4080% ( 15) 00:12:21.634 19792.398 - 19897.677: 98.4768% ( 7) 00:12:21.634 19897.677 - 20002.956: 98.5063% ( 3) 00:12:21.634 20002.956 - 20108.235: 98.5751% ( 7) 00:12:21.634 20318.792 - 20424.071: 98.5947% ( 2) 00:12:21.634 20424.071 - 20529.349: 98.6340% ( 4) 00:12:21.634 20529.349 - 20634.628: 98.6832% ( 5) 00:12:21.634 20634.628 - 20739.907: 98.7127% ( 3) 00:12:21.634 20739.907 - 20845.186: 98.7421% ( 3) 00:12:21.634 34320.861 - 
34531.418: 98.8011% ( 6) 00:12:21.634 34531.418 - 34741.976: 98.8797% ( 8) 00:12:21.634 34741.976 - 34952.533: 98.9387% ( 6) 00:12:21.634 34952.533 - 35163.091: 99.0075% ( 7) 00:12:21.634 35163.091 - 35373.648: 99.0664% ( 6) 00:12:21.634 35373.648 - 35584.206: 99.1254% ( 6) 00:12:21.634 35584.206 - 35794.763: 99.1942% ( 7) 00:12:21.634 35794.763 - 36005.320: 99.2335% ( 4) 00:12:21.634 36005.320 - 36215.878: 99.3023% ( 7) 00:12:21.634 36215.878 - 36426.435: 99.3711% ( 7) 00:12:21.634 43374.831 - 43585.388: 99.3907% ( 2) 00:12:21.634 43585.388 - 43795.945: 99.4399% ( 5) 00:12:21.634 43795.945 - 44006.503: 99.4693% ( 3) 00:12:21.634 44006.503 - 44217.060: 99.5086% ( 4) 00:12:21.634 44217.060 - 44427.618: 99.5480% ( 4) 00:12:21.635 44427.618 - 44638.175: 99.5971% ( 5) 00:12:21.635 44638.175 - 44848.733: 99.6364% ( 4) 00:12:21.635 44848.733 - 45059.290: 99.6757% ( 4) 00:12:21.635 45059.290 - 45269.847: 99.7150% ( 4) 00:12:21.635 45269.847 - 45480.405: 99.7740% ( 6) 00:12:21.635 45480.405 - 45690.962: 99.8133% ( 4) 00:12:21.635 45690.962 - 45901.520: 99.8624% ( 5) 00:12:21.635 45901.520 - 46112.077: 99.9017% ( 4) 00:12:21.635 46112.077 - 46322.635: 99.9509% ( 5) 00:12:21.635 46322.635 - 46533.192: 99.9902% ( 4) 00:12:21.635 46533.192 - 46743.749: 100.0000% ( 1) 00:12:21.635 00:12:21.635 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:21.635 ============================================================================== 00:12:21.635 Range in us Cumulative IO count 00:12:21.635 8317.018 - 8369.658: 0.0197% ( 2) 00:12:21.635 8369.658 - 8422.297: 0.0688% ( 5) 00:12:21.635 8422.297 - 8474.937: 0.1572% ( 9) 00:12:21.635 8474.937 - 8527.576: 0.3931% ( 24) 00:12:21.635 8527.576 - 8580.215: 0.5307% ( 14) 00:12:21.635 8580.215 - 8632.855: 0.6289% ( 10) 00:12:21.635 8632.855 - 8685.494: 0.7469% ( 12) 00:12:21.635 8685.494 - 8738.133: 0.9139% ( 17) 00:12:21.635 8738.133 - 8790.773: 1.2284% ( 32) 00:12:21.635 8790.773 - 8843.412: 1.3463% ( 12) 00:12:21.635 8843.412 - 8896.051: 1.5527% ( 21) 00:12:21.635 8896.051 - 8948.691: 1.7394% ( 19) 00:12:21.635 8948.691 - 9001.330: 1.8475% ( 11) 00:12:21.635 9001.330 - 9053.969: 2.0047% ( 16) 00:12:21.635 9053.969 - 9106.609: 2.2995% ( 30) 00:12:21.635 9106.609 - 9159.248: 2.8204% ( 53) 00:12:21.635 9159.248 - 9211.888: 4.0291% ( 123) 00:12:21.635 9211.888 - 9264.527: 4.5106% ( 49) 00:12:21.635 9264.527 - 9317.166: 5.0904% ( 59) 00:12:21.635 9317.166 - 9369.806: 5.7783% ( 70) 00:12:21.635 9369.806 - 9422.445: 6.9182% ( 116) 00:12:21.635 9422.445 - 9475.084: 7.9894% ( 109) 00:12:21.635 9475.084 - 9527.724: 9.3652% ( 140) 00:12:21.635 9527.724 - 9580.363: 10.8589% ( 152) 00:12:21.635 9580.363 - 9633.002: 12.7948% ( 197) 00:12:21.635 9633.002 - 9685.642: 15.1336% ( 238) 00:12:21.635 9685.642 - 9738.281: 17.9344% ( 285) 00:12:21.635 9738.281 - 9790.920: 20.5090% ( 262) 00:12:21.635 9790.920 - 9843.560: 23.4670% ( 301) 00:12:21.635 9843.560 - 9896.199: 26.5330% ( 312) 00:12:21.635 9896.199 - 9948.839: 29.1667% ( 268) 00:12:21.635 9948.839 - 10001.478: 31.8494% ( 273) 00:12:21.635 10001.478 - 10054.117: 33.9917% ( 218) 00:12:21.635 10054.117 - 10106.757: 35.6918% ( 173) 00:12:21.635 10106.757 - 10159.396: 37.2740% ( 161) 00:12:21.635 10159.396 - 10212.035: 38.7873% ( 154) 00:12:21.635 10212.035 - 10264.675: 40.6053% ( 185) 00:12:21.635 10264.675 - 10317.314: 42.2072% ( 163) 00:12:21.635 10317.314 - 10369.953: 43.4257% ( 124) 00:12:21.635 10369.953 - 10422.593: 44.8899% ( 149) 00:12:21.635 10422.593 - 10475.232: 46.0594% ( 119) 00:12:21.635 10475.232 - 
10527.871: 47.1207% ( 108) 00:12:21.635 10527.871 - 10580.511: 47.9756% ( 87) 00:12:21.635 10580.511 - 10633.150: 48.8208% ( 86) 00:12:21.635 10633.150 - 10685.790: 49.6266% ( 82) 00:12:21.635 10685.790 - 10738.429: 50.3341% ( 72) 00:12:21.635 10738.429 - 10791.068: 50.9139% ( 59) 00:12:21.635 10791.068 - 10843.708: 51.7492% ( 85) 00:12:21.635 10843.708 - 10896.347: 52.3192% ( 58) 00:12:21.635 10896.347 - 10948.986: 52.9383% ( 63) 00:12:21.635 10948.986 - 11001.626: 53.6065% ( 68) 00:12:21.635 11001.626 - 11054.265: 54.1175% ( 52) 00:12:21.635 11054.265 - 11106.904: 54.5499% ( 44) 00:12:21.635 11106.904 - 11159.544: 54.9233% ( 38) 00:12:21.635 11159.544 - 11212.183: 55.3263% ( 41) 00:12:21.635 11212.183 - 11264.822: 55.7488% ( 43) 00:12:21.635 11264.822 - 11317.462: 56.0829% ( 34) 00:12:21.635 11317.462 - 11370.101: 56.3778% ( 30) 00:12:21.635 11370.101 - 11422.741: 56.8200% ( 45) 00:12:21.635 11422.741 - 11475.380: 57.1148% ( 30) 00:12:21.635 11475.380 - 11528.019: 57.4980% ( 39) 00:12:21.635 11528.019 - 11580.659: 57.6847% ( 19) 00:12:21.635 11580.659 - 11633.298: 57.9304% ( 25) 00:12:21.635 11633.298 - 11685.937: 58.1761% ( 25) 00:12:21.635 11685.937 - 11738.577: 58.4807% ( 31) 00:12:21.635 11738.577 - 11791.216: 58.8247% ( 35) 00:12:21.635 11791.216 - 11843.855: 59.0409% ( 22) 00:12:21.635 11843.855 - 11896.495: 59.2669% ( 23) 00:12:21.635 11896.495 - 11949.134: 59.4340% ( 17) 00:12:21.635 11949.134 - 12001.773: 59.7091% ( 28) 00:12:21.635 12001.773 - 12054.413: 59.9351% ( 23) 00:12:21.635 12054.413 - 12107.052: 60.1022% ( 17) 00:12:21.635 12107.052 - 12159.692: 60.2398% ( 14) 00:12:21.635 12159.692 - 12212.331: 60.3381% ( 10) 00:12:21.635 12212.331 - 12264.970: 60.4560% ( 12) 00:12:21.635 12264.970 - 12317.610: 60.5837% ( 13) 00:12:21.635 12317.610 - 12370.249: 60.8196% ( 24) 00:12:21.635 12370.249 - 12422.888: 61.1635% ( 35) 00:12:21.635 12422.888 - 12475.528: 61.5075% ( 35) 00:12:21.635 12475.528 - 12528.167: 62.0086% ( 51) 00:12:21.635 12528.167 - 12580.806: 62.4410% ( 44) 00:12:21.635 12580.806 - 12633.446: 62.8439% ( 41) 00:12:21.635 12633.446 - 12686.085: 63.1584% ( 32) 00:12:21.635 12686.085 - 12738.724: 63.3746% ( 22) 00:12:21.635 12738.724 - 12791.364: 63.6105% ( 24) 00:12:21.635 12791.364 - 12844.003: 63.8365% ( 23) 00:12:21.635 12844.003 - 12896.643: 64.0527% ( 22) 00:12:21.635 12896.643 - 12949.282: 64.3868% ( 34) 00:12:21.635 12949.282 - 13001.921: 64.6816% ( 30) 00:12:21.635 13001.921 - 13054.561: 65.0452% ( 37) 00:12:21.635 13054.561 - 13107.200: 65.3597% ( 32) 00:12:21.635 13107.200 - 13159.839: 65.6643% ( 31) 00:12:21.635 13159.839 - 13212.479: 65.9002% ( 24) 00:12:21.635 13212.479 - 13265.118: 66.0672% ( 17) 00:12:21.635 13265.118 - 13317.757: 66.2736% ( 21) 00:12:21.635 13317.757 - 13370.397: 66.4308% ( 16) 00:12:21.635 13370.397 - 13423.036: 66.5782% ( 15) 00:12:21.635 13423.036 - 13475.676: 66.7256% ( 15) 00:12:21.635 13475.676 - 13580.954: 67.1384% ( 42) 00:12:21.635 13580.954 - 13686.233: 67.7771% ( 65) 00:12:21.635 13686.233 - 13791.512: 68.5633% ( 80) 00:12:21.635 13791.512 - 13896.790: 69.2807% ( 73) 00:12:21.635 13896.790 - 14002.069: 70.0275% ( 76) 00:12:21.635 14002.069 - 14107.348: 70.5680% ( 55) 00:12:21.635 14107.348 - 14212.627: 71.1380% ( 58) 00:12:21.635 14212.627 - 14317.905: 71.7079% ( 58) 00:12:21.635 14317.905 - 14423.184: 72.5138% ( 82) 00:12:21.635 14423.184 - 14528.463: 73.4178% ( 92) 00:12:21.635 14528.463 - 14633.741: 74.0566% ( 65) 00:12:21.635 14633.741 - 14739.020: 74.6364% ( 59) 00:12:21.635 14739.020 - 14844.299: 75.0983% ( 47) 
00:12:21.635 14844.299 - 14949.578: 75.6289% ( 54) 00:12:21.635 14949.578 - 15054.856: 75.9925% ( 37) 00:12:21.635 15054.856 - 15160.135: 76.5035% ( 52) 00:12:21.635 15160.135 - 15265.414: 77.2013% ( 71) 00:12:21.635 15265.414 - 15370.692: 78.0071% ( 82) 00:12:21.635 15370.692 - 15475.971: 78.5770% ( 58) 00:12:21.635 15475.971 - 15581.250: 79.3829% ( 82) 00:12:21.635 15581.250 - 15686.529: 79.9725% ( 60) 00:12:21.635 15686.529 - 15791.807: 80.5326% ( 57) 00:12:21.635 15791.807 - 15897.086: 81.2598% ( 74) 00:12:21.635 15897.086 - 16002.365: 81.8101% ( 56) 00:12:21.635 16002.365 - 16107.643: 82.4292% ( 63) 00:12:21.635 16107.643 - 16212.922: 83.0877% ( 67) 00:12:21.635 16212.922 - 16318.201: 83.9524% ( 88) 00:12:21.635 16318.201 - 16423.480: 84.6502% ( 71) 00:12:21.635 16423.480 - 16528.758: 85.4363% ( 80) 00:12:21.635 16528.758 - 16634.037: 86.3699% ( 95) 00:12:21.635 16634.037 - 16739.316: 87.0381% ( 68) 00:12:21.635 16739.316 - 16844.594: 87.4017% ( 37) 00:12:21.635 16844.594 - 16949.873: 87.8439% ( 45) 00:12:21.635 16949.873 - 17055.152: 88.2960% ( 46) 00:12:21.635 17055.152 - 17160.431: 88.7480% ( 46) 00:12:21.635 17160.431 - 17265.709: 89.1608% ( 42) 00:12:21.635 17265.709 - 17370.988: 89.5539% ( 40) 00:12:21.635 17370.988 - 17476.267: 89.9076% ( 36) 00:12:21.635 17476.267 - 17581.545: 90.3302% ( 43) 00:12:21.635 17581.545 - 17686.824: 90.8412% ( 52) 00:12:21.635 17686.824 - 17792.103: 91.4210% ( 59) 00:12:21.635 17792.103 - 17897.382: 91.8534% ( 44) 00:12:21.635 17897.382 - 18002.660: 92.2465% ( 40) 00:12:21.635 18002.660 - 18107.939: 92.6690% ( 43) 00:12:21.635 18107.939 - 18213.218: 93.0228% ( 36) 00:12:21.635 18213.218 - 18318.496: 93.5829% ( 57) 00:12:21.635 18318.496 - 18423.775: 93.9760% ( 40) 00:12:21.635 18423.775 - 18529.054: 94.5853% ( 62) 00:12:21.635 18529.054 - 18634.333: 95.0177% ( 44) 00:12:21.635 18634.333 - 18739.611: 95.4403% ( 43) 00:12:21.635 18739.611 - 18844.890: 95.8235% ( 39) 00:12:21.635 18844.890 - 18950.169: 96.1773% ( 36) 00:12:21.635 18950.169 - 19055.447: 96.5507% ( 38) 00:12:21.635 19055.447 - 19160.726: 96.8848% ( 34) 00:12:21.635 19160.726 - 19266.005: 97.1207% ( 24) 00:12:21.635 19266.005 - 19371.284: 97.3664% ( 25) 00:12:21.635 19371.284 - 19476.562: 97.7594% ( 40) 00:12:21.635 19476.562 - 19581.841: 98.0641% ( 31) 00:12:21.635 19581.841 - 19687.120: 98.2311% ( 17) 00:12:21.635 19687.120 - 19792.398: 98.3589% ( 13) 00:12:21.635 19792.398 - 19897.677: 98.4768% ( 12) 00:12:21.635 19897.677 - 20002.956: 98.5554% ( 8) 00:12:21.635 20002.956 - 20108.235: 98.6144% ( 6) 00:12:21.635 20108.235 - 20213.513: 98.6733% ( 6) 00:12:21.635 20213.513 - 20318.792: 98.7323% ( 6) 00:12:21.635 20318.792 - 20424.071: 98.7421% ( 1) 00:12:21.635 32215.287 - 32425.844: 98.7618% ( 2) 00:12:21.635 32425.844 - 32636.402: 98.8306% ( 7) 00:12:21.635 32636.402 - 32846.959: 98.8994% ( 7) 00:12:21.635 32846.959 - 33057.516: 98.9682% ( 7) 00:12:21.636 33057.516 - 33268.074: 99.0271% ( 6) 00:12:21.636 33268.074 - 33478.631: 99.1057% ( 8) 00:12:21.636 33478.631 - 33689.189: 99.1647% ( 6) 00:12:21.636 33689.189 - 33899.746: 99.2335% ( 7) 00:12:21.636 33899.746 - 34110.304: 99.3023% ( 7) 00:12:21.636 34110.304 - 34320.861: 99.3711% ( 7) 00:12:21.636 41900.929 - 42111.486: 99.3809% ( 1) 00:12:21.636 42111.486 - 42322.043: 99.4202% ( 4) 00:12:21.636 42322.043 - 42532.601: 99.4693% ( 5) 00:12:21.636 42532.601 - 42743.158: 99.5185% ( 5) 00:12:21.636 42743.158 - 42953.716: 99.5676% ( 5) 00:12:21.636 42953.716 - 43164.273: 99.6069% ( 4) 00:12:21.636 43164.273 - 43374.831: 99.6462% ( 4) 
00:12:21.636 43374.831 - 43585.388: 99.6954% ( 5) 00:12:21.636 43585.388 - 43795.945: 99.7445% ( 5) 00:12:21.636 43795.945 - 44006.503: 99.7936% ( 5) 00:12:21.636 44006.503 - 44217.060: 99.8329% ( 4) 00:12:21.636 44217.060 - 44427.618: 99.8821% ( 5) 00:12:21.636 44427.618 - 44638.175: 99.9312% ( 5) 00:12:21.636 44638.175 - 44848.733: 99.9803% ( 5) 00:12:21.636 44848.733 - 45059.290: 100.0000% ( 2) 00:12:21.636 00:12:21.636 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:21.636 ============================================================================== 00:12:21.636 Range in us Cumulative IO count 00:12:21.636 8317.018 - 8369.658: 0.0098% ( 1) 00:12:21.636 8369.658 - 8422.297: 0.0197% ( 1) 00:12:21.636 8422.297 - 8474.937: 0.0295% ( 1) 00:12:21.636 8527.576 - 8580.215: 0.0590% ( 3) 00:12:21.636 8580.215 - 8632.855: 0.1572% ( 10) 00:12:21.636 8632.855 - 8685.494: 0.3145% ( 16) 00:12:21.636 8685.494 - 8738.133: 0.5110% ( 20) 00:12:21.636 8738.133 - 8790.773: 0.8648% ( 36) 00:12:21.636 8790.773 - 8843.412: 1.1203% ( 26) 00:12:21.636 8843.412 - 8896.051: 1.3758% ( 26) 00:12:21.636 8896.051 - 8948.691: 1.6116% ( 24) 00:12:21.636 8948.691 - 9001.330: 1.9064% ( 30) 00:12:21.636 9001.330 - 9053.969: 2.1423% ( 24) 00:12:21.636 9053.969 - 9106.609: 2.4666% ( 33) 00:12:21.636 9106.609 - 9159.248: 2.9088% ( 45) 00:12:21.636 9159.248 - 9211.888: 3.2822% ( 38) 00:12:21.636 9211.888 - 9264.527: 4.0291% ( 76) 00:12:21.636 9264.527 - 9317.166: 4.6580% ( 64) 00:12:21.636 9317.166 - 9369.806: 5.5523% ( 91) 00:12:21.636 9369.806 - 9422.445: 6.7020% ( 117) 00:12:21.636 9422.445 - 9475.084: 7.6553% ( 97) 00:12:21.636 9475.084 - 9527.724: 9.3357% ( 171) 00:12:21.636 9527.724 - 9580.363: 11.0358% ( 173) 00:12:21.636 9580.363 - 9633.002: 13.1486% ( 215) 00:12:21.636 9633.002 - 9685.642: 15.1533% ( 204) 00:12:21.636 9685.642 - 9738.281: 17.6101% ( 250) 00:12:21.636 9738.281 - 9790.920: 19.6541% ( 208) 00:12:21.636 9790.920 - 9843.560: 21.8652% ( 225) 00:12:21.636 9843.560 - 9896.199: 23.6832% ( 185) 00:12:21.636 9896.199 - 9948.839: 25.6682% ( 202) 00:12:21.636 9948.839 - 10001.478: 27.8990% ( 227) 00:12:21.636 10001.478 - 10054.117: 30.1592% ( 230) 00:12:21.636 10054.117 - 10106.757: 32.9206% ( 281) 00:12:21.636 10106.757 - 10159.396: 35.6230% ( 275) 00:12:21.636 10159.396 - 10212.035: 37.5983% ( 201) 00:12:21.636 10212.035 - 10264.675: 39.6128% ( 205) 00:12:21.636 10264.675 - 10317.314: 41.3620% ( 178) 00:12:21.636 10317.314 - 10369.953: 42.6789% ( 134) 00:12:21.636 10369.953 - 10422.593: 44.1333% ( 148) 00:12:21.636 10422.593 - 10475.232: 45.4403% ( 133) 00:12:21.636 10475.232 - 10527.871: 46.4819% ( 106) 00:12:21.636 10527.871 - 10580.511: 48.1427% ( 169) 00:12:21.636 10580.511 - 10633.150: 49.6659% ( 155) 00:12:21.636 10633.150 - 10685.790: 50.6289% ( 98) 00:12:21.636 10685.790 - 10738.429: 51.4544% ( 84) 00:12:21.636 10738.429 - 10791.068: 52.4764% ( 104) 00:12:21.636 10791.068 - 10843.708: 53.0955% ( 63) 00:12:21.636 10843.708 - 10896.347: 53.6262% ( 54) 00:12:21.636 10896.347 - 10948.986: 53.9701% ( 35) 00:12:21.636 10948.986 - 11001.626: 54.2453% ( 28) 00:12:21.636 11001.626 - 11054.265: 54.5597% ( 32) 00:12:21.636 11054.265 - 11106.904: 54.7858% ( 23) 00:12:21.636 11106.904 - 11159.544: 54.9233% ( 14) 00:12:21.636 11159.544 - 11212.183: 55.1690% ( 25) 00:12:21.636 11212.183 - 11264.822: 55.5326% ( 37) 00:12:21.636 11264.822 - 11317.462: 55.8766% ( 35) 00:12:21.636 11317.462 - 11370.101: 56.0928% ( 22) 00:12:21.636 11370.101 - 11422.741: 56.4662% ( 38) 00:12:21.636 11422.741 - 
11475.380: 56.8888% ( 43) 00:12:21.636 11475.380 - 11528.019: 57.1443% ( 26) 00:12:21.636 11528.019 - 11580.659: 57.4489% ( 31) 00:12:21.636 11580.659 - 11633.298: 57.5275% ( 8) 00:12:21.636 11633.298 - 11685.937: 57.5865% ( 6) 00:12:21.636 11685.937 - 11738.577: 57.6749% ( 9) 00:12:21.636 11738.577 - 11791.216: 57.7634% ( 9) 00:12:21.636 11791.216 - 11843.855: 57.9108% ( 15) 00:12:21.636 11843.855 - 11896.495: 58.2154% ( 31) 00:12:21.636 11896.495 - 11949.134: 58.4119% ( 20) 00:12:21.636 11949.134 - 12001.773: 58.5888% ( 18) 00:12:21.636 12001.773 - 12054.413: 58.7657% ( 18) 00:12:21.636 12054.413 - 12107.052: 59.0802% ( 32) 00:12:21.636 12107.052 - 12159.692: 59.4438% ( 37) 00:12:21.636 12159.692 - 12212.331: 59.7189% ( 28) 00:12:21.636 12212.331 - 12264.970: 60.0334% ( 32) 00:12:21.636 12264.970 - 12317.610: 60.4068% ( 38) 00:12:21.636 12317.610 - 12370.249: 60.7017% ( 30) 00:12:21.636 12370.249 - 12422.888: 61.0653% ( 37) 00:12:21.636 12422.888 - 12475.528: 61.4092% ( 35) 00:12:21.636 12475.528 - 12528.167: 61.8219% ( 42) 00:12:21.636 12528.167 - 12580.806: 62.1462% ( 33) 00:12:21.636 12580.806 - 12633.446: 62.5491% ( 41) 00:12:21.636 12633.446 - 12686.085: 62.8931% ( 35) 00:12:21.636 12686.085 - 12738.724: 63.1781% ( 29) 00:12:21.636 12738.724 - 12791.364: 63.5417% ( 37) 00:12:21.636 12791.364 - 12844.003: 64.0035% ( 47) 00:12:21.636 12844.003 - 12896.643: 64.3180% ( 32) 00:12:21.636 12896.643 - 12949.282: 64.6325% ( 32) 00:12:21.636 12949.282 - 13001.921: 64.9666% ( 34) 00:12:21.636 13001.921 - 13054.561: 65.1828% ( 22) 00:12:21.636 13054.561 - 13107.200: 65.3498% ( 17) 00:12:21.636 13107.200 - 13159.839: 65.5464% ( 20) 00:12:21.636 13159.839 - 13212.479: 65.7528% ( 21) 00:12:21.636 13212.479 - 13265.118: 65.9788% ( 23) 00:12:21.636 13265.118 - 13317.757: 66.3424% ( 37) 00:12:21.636 13317.757 - 13370.397: 66.7846% ( 45) 00:12:21.636 13370.397 - 13423.036: 67.0696% ( 29) 00:12:21.636 13423.036 - 13475.676: 67.4823% ( 42) 00:12:21.636 13475.676 - 13580.954: 68.1211% ( 65) 00:12:21.636 13580.954 - 13686.233: 68.6419% ( 53) 00:12:21.636 13686.233 - 13791.512: 69.3003% ( 67) 00:12:21.636 13791.512 - 13896.790: 69.8113% ( 52) 00:12:21.636 13896.790 - 14002.069: 70.4108% ( 61) 00:12:21.636 14002.069 - 14107.348: 70.9906% ( 59) 00:12:21.636 14107.348 - 14212.627: 71.4033% ( 42) 00:12:21.636 14212.627 - 14317.905: 72.0126% ( 62) 00:12:21.636 14317.905 - 14423.184: 72.6513% ( 65) 00:12:21.636 14423.184 - 14528.463: 73.0837% ( 44) 00:12:21.636 14528.463 - 14633.741: 73.7618% ( 69) 00:12:21.636 14633.741 - 14739.020: 74.3121% ( 56) 00:12:21.636 14739.020 - 14844.299: 75.0098% ( 71) 00:12:21.636 14844.299 - 14949.578: 76.1891% ( 120) 00:12:21.636 14949.578 - 15054.856: 77.2504% ( 108) 00:12:21.636 15054.856 - 15160.135: 77.8400% ( 60) 00:12:21.636 15160.135 - 15265.414: 78.2626% ( 43) 00:12:21.636 15265.414 - 15370.692: 78.7146% ( 46) 00:12:21.636 15370.692 - 15475.971: 79.2060% ( 50) 00:12:21.636 15475.971 - 15581.250: 79.8644% ( 67) 00:12:21.636 15581.250 - 15686.529: 80.4442% ( 59) 00:12:21.636 15686.529 - 15791.807: 81.1616% ( 73) 00:12:21.636 15791.807 - 15897.086: 81.6824% ( 53) 00:12:21.636 15897.086 - 16002.365: 82.1344% ( 46) 00:12:21.636 16002.365 - 16107.643: 82.7535% ( 63) 00:12:21.636 16107.643 - 16212.922: 83.6183% ( 88) 00:12:21.636 16212.922 - 16318.201: 84.2571% ( 65) 00:12:21.636 16318.201 - 16423.480: 84.7583% ( 51) 00:12:21.636 16423.480 - 16528.758: 85.3479% ( 60) 00:12:21.636 16528.758 - 16634.037: 85.9965% ( 66) 00:12:21.636 16634.037 - 16739.316: 86.4878% ( 50) 
00:12:21.636 16739.316 - 16844.594: 86.8711% ( 39) 00:12:21.636 16844.594 - 16949.873: 87.1855% ( 32) 00:12:21.636 16949.873 - 17055.152: 87.7850% ( 61) 00:12:21.636 17055.152 - 17160.431: 88.3255% ( 55) 00:12:21.636 17160.431 - 17265.709: 88.8856% ( 57) 00:12:21.636 17265.709 - 17370.988: 89.3671% ( 49) 00:12:21.636 17370.988 - 17476.267: 90.1140% ( 76) 00:12:21.636 17476.267 - 17581.545: 90.7724% ( 67) 00:12:21.636 17581.545 - 17686.824: 91.5684% ( 81) 00:12:21.636 17686.824 - 17792.103: 92.4725% ( 92) 00:12:21.636 17792.103 - 17897.382: 93.1899% ( 73) 00:12:21.636 17897.382 - 18002.660: 93.9269% ( 75) 00:12:21.636 18002.660 - 18107.939: 94.5067% ( 59) 00:12:21.636 18107.939 - 18213.218: 95.0668% ( 57) 00:12:21.636 18213.218 - 18318.496: 95.6368% ( 58) 00:12:21.636 18318.496 - 18423.775: 96.1478% ( 52) 00:12:21.636 18423.775 - 18529.054: 96.5016% ( 36) 00:12:21.636 18529.054 - 18634.333: 96.7178% ( 22) 00:12:21.636 18634.333 - 18739.611: 96.9241% ( 21) 00:12:21.636 18739.611 - 18844.890: 97.0912% ( 17) 00:12:21.636 18844.890 - 18950.169: 97.1895% ( 10) 00:12:21.636 18950.169 - 19055.447: 97.2976% ( 11) 00:12:21.636 19055.447 - 19160.726: 97.4155% ( 12) 00:12:21.636 19160.726 - 19266.005: 97.5334% ( 12) 00:12:21.636 19266.005 - 19371.284: 97.6513% ( 12) 00:12:21.636 19371.284 - 19476.562: 97.7300% ( 8) 00:12:21.636 19476.562 - 19581.841: 97.8184% ( 9) 00:12:21.636 19581.841 - 19687.120: 97.9167% ( 10) 00:12:21.636 19687.120 - 19792.398: 98.0149% ( 10) 00:12:21.636 19792.398 - 19897.677: 98.1034% ( 9) 00:12:21.636 19897.677 - 20002.956: 98.3097% ( 21) 00:12:21.636 20002.956 - 20108.235: 98.4670% ( 16) 00:12:21.636 20108.235 - 20213.513: 98.6046% ( 14) 00:12:21.636 20213.513 - 20318.792: 98.6242% ( 2) 00:12:21.636 20318.792 - 20424.071: 98.6537% ( 3) 00:12:21.637 20424.071 - 20529.349: 98.6832% ( 3) 00:12:21.637 20529.349 - 20634.628: 98.7127% ( 3) 00:12:21.637 20634.628 - 20739.907: 98.7323% ( 2) 00:12:21.637 20739.907 - 20845.186: 98.7421% ( 1) 00:12:21.637 31373.057 - 31583.614: 98.7814% ( 4) 00:12:21.637 31583.614 - 31794.172: 98.8502% ( 7) 00:12:21.637 31794.172 - 32004.729: 98.9190% ( 7) 00:12:21.637 32004.729 - 32215.287: 98.9976% ( 8) 00:12:21.637 32215.287 - 32425.844: 99.0664% ( 7) 00:12:21.637 32425.844 - 32636.402: 99.1352% ( 7) 00:12:21.637 32636.402 - 32846.959: 99.1942% ( 6) 00:12:21.637 32846.959 - 33057.516: 99.2728% ( 8) 00:12:21.637 33057.516 - 33268.074: 99.3318% ( 6) 00:12:21.637 33268.074 - 33478.631: 99.3711% ( 4) 00:12:21.637 41690.371 - 41900.929: 99.3809% ( 1) 00:12:21.637 41900.929 - 42111.486: 99.4497% ( 7) 00:12:21.637 42111.486 - 42322.043: 99.5185% ( 7) 00:12:21.637 42322.043 - 42532.601: 99.5676% ( 5) 00:12:21.637 42532.601 - 42743.158: 99.6364% ( 7) 00:12:21.637 42743.158 - 42953.716: 99.7052% ( 7) 00:12:21.637 42953.716 - 43164.273: 99.7740% ( 7) 00:12:21.637 43164.273 - 43374.831: 99.8428% ( 7) 00:12:21.637 43374.831 - 43585.388: 99.9116% ( 7) 00:12:21.637 43585.388 - 43795.945: 99.9803% ( 7) 00:12:21.637 43795.945 - 44006.503: 100.0000% ( 2) 00:12:21.637 00:12:21.637 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:21.637 ============================================================================== 00:12:21.637 Range in us Cumulative IO count 00:12:21.637 8317.018 - 8369.658: 0.0491% ( 5) 00:12:21.637 8369.658 - 8422.297: 0.1278% ( 8) 00:12:21.637 8422.297 - 8474.937: 0.2457% ( 12) 00:12:21.637 8474.937 - 8527.576: 0.4226% ( 18) 00:12:21.637 8527.576 - 8580.215: 0.6289% ( 21) 00:12:21.637 8580.215 - 8632.855: 0.7763% ( 15) 00:12:21.637 
8632.855 - 8685.494: 1.0122% ( 24) 00:12:21.637 8685.494 - 8738.133: 1.1301% ( 12) 00:12:21.637 8738.133 - 8790.773: 1.2284% ( 10) 00:12:21.637 8790.773 - 8843.412: 1.3660% ( 14) 00:12:21.637 8843.412 - 8896.051: 1.4937% ( 13) 00:12:21.637 8896.051 - 8948.691: 1.7296% ( 24) 00:12:21.637 8948.691 - 9001.330: 2.0440% ( 32) 00:12:21.637 9001.330 - 9053.969: 2.3781% ( 34) 00:12:21.637 9053.969 - 9106.609: 2.6828% ( 31) 00:12:21.637 9106.609 - 9159.248: 3.0071% ( 33) 00:12:21.637 9159.248 - 9211.888: 3.3314% ( 33) 00:12:21.637 9211.888 - 9264.527: 3.7736% ( 45) 00:12:21.637 9264.527 - 9317.166: 4.4320% ( 67) 00:12:21.637 9317.166 - 9369.806: 5.2378% ( 82) 00:12:21.637 9369.806 - 9422.445: 6.1910% ( 97) 00:12:21.637 9422.445 - 9475.084: 7.2327% ( 106) 00:12:21.637 9475.084 - 9527.724: 8.5790% ( 137) 00:12:21.637 9527.724 - 9580.363: 10.1612% ( 161) 00:12:21.637 9580.363 - 9633.002: 12.3329% ( 221) 00:12:21.637 9633.002 - 9685.642: 14.5637% ( 227) 00:12:21.637 9685.642 - 9738.281: 17.0991% ( 258) 00:12:21.637 9738.281 - 9790.920: 19.2610% ( 220) 00:12:21.637 9790.920 - 9843.560: 21.7374% ( 252) 00:12:21.637 9843.560 - 9896.199: 23.9485% ( 225) 00:12:21.637 9896.199 - 9948.839: 26.3070% ( 240) 00:12:21.637 9948.839 - 10001.478: 28.5869% ( 232) 00:12:21.637 10001.478 - 10054.117: 30.9159% ( 237) 00:12:21.637 10054.117 - 10106.757: 33.2056% ( 233) 00:12:21.637 10106.757 - 10159.396: 35.6132% ( 245) 00:12:21.637 10159.396 - 10212.035: 37.6081% ( 203) 00:12:21.637 10212.035 - 10264.675: 39.5047% ( 193) 00:12:21.637 10264.675 - 10317.314: 41.2441% ( 177) 00:12:21.637 10317.314 - 10369.953: 42.8557% ( 164) 00:12:21.637 10369.953 - 10422.593: 44.0055% ( 117) 00:12:21.637 10422.593 - 10475.232: 45.0865% ( 110) 00:12:21.637 10475.232 - 10527.871: 46.4131% ( 135) 00:12:21.637 10527.871 - 10580.511: 47.1698% ( 77) 00:12:21.637 10580.511 - 10633.150: 48.2410% ( 109) 00:12:21.637 10633.150 - 10685.790: 49.2040% ( 98) 00:12:21.637 10685.790 - 10738.429: 49.7642% ( 57) 00:12:21.637 10738.429 - 10791.068: 50.2064% ( 45) 00:12:21.637 10791.068 - 10843.708: 50.5994% ( 40) 00:12:21.637 10843.708 - 10896.347: 51.0318% ( 44) 00:12:21.637 10896.347 - 10948.986: 51.5625% ( 54) 00:12:21.637 10948.986 - 11001.626: 52.2013% ( 65) 00:12:21.637 11001.626 - 11054.265: 52.8400% ( 65) 00:12:21.637 11054.265 - 11106.904: 53.2626% ( 43) 00:12:21.637 11106.904 - 11159.544: 53.7834% ( 53) 00:12:21.637 11159.544 - 11212.183: 54.3042% ( 53) 00:12:21.637 11212.183 - 11264.822: 54.9430% ( 65) 00:12:21.637 11264.822 - 11317.462: 55.4540% ( 52) 00:12:21.637 11317.462 - 11370.101: 55.9552% ( 51) 00:12:21.637 11370.101 - 11422.741: 56.4269% ( 48) 00:12:21.637 11422.741 - 11475.380: 56.8396% ( 42) 00:12:21.637 11475.380 - 11528.019: 57.3605% ( 53) 00:12:21.637 11528.019 - 11580.659: 57.7437% ( 39) 00:12:21.637 11580.659 - 11633.298: 58.0287% ( 29) 00:12:21.637 11633.298 - 11685.937: 58.2252% ( 20) 00:12:21.637 11685.937 - 11738.577: 58.3726% ( 15) 00:12:21.637 11738.577 - 11791.216: 58.5299% ( 16) 00:12:21.637 11791.216 - 11843.855: 58.6871% ( 16) 00:12:21.637 11843.855 - 11896.495: 58.9230% ( 24) 00:12:21.637 11896.495 - 11949.134: 59.0311% ( 11) 00:12:21.637 11949.134 - 12001.773: 59.1686% ( 14) 00:12:21.637 12001.773 - 12054.413: 59.3947% ( 23) 00:12:21.637 12054.413 - 12107.052: 59.6305% ( 24) 00:12:21.637 12107.052 - 12159.692: 59.9057% ( 28) 00:12:21.637 12159.692 - 12212.331: 60.2791% ( 38) 00:12:21.637 12212.331 - 12264.970: 60.6623% ( 39) 00:12:21.637 12264.970 - 12317.610: 61.1046% ( 45) 00:12:21.637 12317.610 - 12370.249: 
00:12:21.637 [bucket-by-bucket data condensed: 12370.249 - 42532.601 us, cumulative IO count 61.6156% -> 100.0000%]
00:12:21.638 
00:12:21.638 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:12:21.638 ==============================================================================
00:12:21.638 Range in us Cumulative IO count
00:12:21.638 [bucket-by-bucket data condensed: 8211.740 - 40848.141 us, cumulative IO count 0.0098% -> 100.0000%]
00:12:21.639 
00:12:21.639 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:12:21.639 ==============================================================================
00:12:21.639 Range in us Cumulative IO count
00:12:21.640 [bucket-by-bucket data condensed: 8211.740 - 30109.712 us, cumulative IO count 0.0098% -> 100.0000%]
00:12:21.640 
00:12:21.640 20:30:29 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:12:21.640 
00:12:21.640 real 0m2.688s
00:12:21.640 user 0m2.277s
00:12:21.640 sys 0m0.312s
00:12:21.640 ************************************
00:12:21.640 20:30:29 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:21.640 20:30:29 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:12:21.640 END TEST nvme_perf
00:12:21.640 ************************************
00:12:21.640 20:30:29 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:12:21.640 20:30:29 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:21.640 20:30:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:21.640 20:30:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:21.640 ************************************
00:12:21.640 START TEST nvme_hello_world
00:12:21.640 ************************************
00:12:21.640 20:30:29 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:12:21.897 Initializing NVMe Controllers
00:12:21.897 Attached to 0000:00:10.0
00:12:21.897 Namespace ID: 1 size: 6GB
00:12:21.897 Attached to 0000:00:11.0
00:12:21.897 Namespace ID: 1 size: 5GB
00:12:21.897 Attached to 0000:00:13.0
00:12:21.897 Namespace ID: 1 size: 1GB
00:12:21.897 Attached to 0000:00:12.0
00:12:21.897 Namespace ID: 1 size: 4GB
00:12:21.897 Namespace ID: 2 size: 4GB
00:12:21.897 Namespace ID: 3 size: 4GB
00:12:21.897 Initialization complete.
00:12:21.897 INFO: using host memory buffer for IO
00:12:21.897 Hello world!
00:12:21.897 INFO: using host memory buffer for IO
00:12:21.897 Hello world!
00:12:21.897 INFO: using host memory buffer for IO
00:12:21.897 Hello world!
00:12:21.897 INFO: using host memory buffer for IO
00:12:21.897 Hello world!
00:12:21.897 INFO: using host memory buffer for IO
00:12:21.897 Hello world!
00:12:21.897 INFO: using host memory buffer for IO
00:12:21.897 Hello world!
00:12:21.897 
00:12:21.897 real 0m0.319s
00:12:21.897 user 0m0.119s
00:12:21.897 sys 0m0.150s
20:30:29 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:21.897 ************************************
00:12:21.897 END TEST nvme_hello_world
00:12:21.897 ************************************
00:12:21.897 20:30:29 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
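For reference, the hello_world binary exercised above follows the standard SPDK probe/attach/IO pattern: scan the PCIe bus, attach every controller, write a buffer to namespace 1 and poll the qpair for the completion. A minimal sketch of that flow, assuming the public SPDK NVMe API from spdk/nvme.h and spdk/env.h; this is an illustration, not the example's actual source:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            return true; /* attach to every controller the bus scan finds */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr);
            g_ctrlr = ctrlr;
    }

    static void
    io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            *(bool *)arg = true;
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "hello_world_sketch";
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }
            if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || !g_ctrlr) {
                    return 1;
            }

            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1);
            struct spdk_nvme_qpair *qpair =
                    spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
            /* Host DMA buffer; matches the "using host memory buffer for IO" path. */
            char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
                                     SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
            bool done = false;

            snprintf(buf, 0x1000, "%s", "Hello world!");
            /* Write one block at LBA 0, then poll the qpair until it completes. */
            if (spdk_nvme_ns_cmd_write(ns, qpair, buf, 0, 1, io_complete, &done, 0) == 0) {
                    while (!done) {
                            spdk_nvme_qpair_process_completions(qpair, 0);
                    }
                    printf("%s\n", buf);
            }

            spdk_free(buf);
            spdk_nvme_ctrlr_free_io_qpair(qpair);
            spdk_nvme_detach(g_ctrlr);
            return 0;
    }

The real example repeats the write/read pair once per attached namespace, which is why the log prints six "Hello world!" lines for the six namespaces enumerated above.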
00:12:21.897 20:30:29 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:12:21.897 20:30:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:21.897 20:30:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:21.897 20:30:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:21.897 ************************************
00:12:21.897 START TEST nvme_sgl
00:12:21.897 ************************************
00:12:21.897 20:30:29 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:12:22.153 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:12:22.153 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:12:22.153 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:12:22.153 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:12:22.153 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:12:22.153 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:12:22.153 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:12:22.153 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:12:22.153 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:12:22.412 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:12:22.412 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:12:22.412 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:12:22.412 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:12:22.412 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:12:22.412 NVMe Readv/Writev Request test
00:12:22.412 Attached to 0000:00:10.0
00:12:22.412 Attached to 0000:00:11.0
00:12:22.412 Attached to 0000:00:13.0
00:12:22.412 Attached to 0000:00:12.0
00:12:22.412 0000:00:10.0: build_io_request_2 test passed
00:12:22.412 0000:00:10.0: build_io_request_4 test passed
00:12:22.412 0000:00:10.0: build_io_request_5 test passed
00:12:22.412 0000:00:10.0: build_io_request_6 test passed
00:12:22.412 0000:00:10.0: build_io_request_7 test passed
00:12:22.412 0000:00:10.0: build_io_request_10 test passed
00:12:22.412 0000:00:11.0: build_io_request_2 test passed
00:12:22.412 0000:00:11.0: build_io_request_4 test passed
00:12:22.412 0000:00:11.0: build_io_request_5 test passed
00:12:22.412 0000:00:11.0: build_io_request_6 test passed
00:12:22.412 0000:00:11.0: build_io_request_7 test passed
00:12:22.412 0000:00:11.0: build_io_request_10 test passed
00:12:22.412 Cleaning up...
00:12:22.412 
00:12:22.412 real 0m0.364s
00:12:22.412 user 0m0.184s
00:12:22.412 sys 0m0.139s
20:30:30 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:22.412 ************************************
00:12:22.412 END TEST nvme_sgl
00:12:22.412 ************************************
00:12:22.412 20:30:30 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
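The "Invalid IO length parameter" / "test passed" pairs above come from the sgl test building vectored requests whose scatter-gather elements do or do not add up to a whole number of blocks. A sketch of how such a request is built with SPDK's vectored write path, assuming the spdk_nvme_ns_cmd_writev reset_sgl/next_sge callback contract; the sgl_ctx layout is hypothetical:

    #include "spdk/nvme.h"

    /* Hypothetical scattered payload: two SGEs that together hold one block. */
    struct sgl_ctx {
            void     *sge[2];
            uint32_t  len[2];
            int       idx;
    };

    static void
    reset_sgl(void *arg, uint32_t offset)
    {
            struct sgl_ctx *c = arg;
            c->idx = 0;        /* restart iteration; offset handling elided */
    }

    static int
    next_sge(void *arg, void **address, uint32_t *length)
    {
            struct sgl_ctx *c = arg;
            *address = c->sge[c->idx];
            *length = c->len[c->idx];
            c->idx++;
            return 0;
    }

    /* If the SGE lengths do not add up to lba_count times the block size, the
     * request is rejected before it reaches the device; the "Invalid IO length
     * parameter" lines above are the test asserting exactly that. */
    static int
    submit_scattered_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                           struct sgl_ctx *c, spdk_nvme_cmd_cb cb_fn)
    {
            return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* lba */, 1 /* blocks */,
                                           cb_fn, c, 0, reset_sgl, next_sge);
    }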
00:12:22.412 20:30:30 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:12:22.412 20:30:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:22.412 20:30:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:22.412 20:30:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:22.412 ************************************
00:12:22.412 START TEST nvme_e2edp
00:12:22.412 ************************************
00:12:22.412 20:30:30 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:12:22.669 NVMe Write/Read with End-to-End data protection test
00:12:22.669 Attached to 0000:00:10.0
00:12:22.669 Attached to 0000:00:11.0
00:12:22.669 Attached to 0000:00:13.0
00:12:22.669 Attached to 0000:00:12.0
00:12:22.669 Cleaning up...
00:12:22.669 
00:12:22.669 real 0m0.288s
00:12:22.669 user 0m0.106s
00:12:22.669 sys 0m0.135s
20:30:30 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:22.669 ************************************
00:12:22.669 END TEST nvme_e2edp
00:12:22.669 ************************************
00:12:22.669 20:30:30 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
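nvme_dp drives writes and reads with NVMe end-to-end protection information enabled. A hedged sketch of one such write, assuming the namespace is formatted with protection information and using the PRACT flag so the controller generates and verifies the DIF fields itself; the real test also walks PRCHK combinations and separate-metadata formats:

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Issue a write with PRACT set so the controller generates the DIF guard,
     * application and reference tags; on read-back the same flag makes the
     * controller verify and strip them. */
    static int
    write_with_e2e_protection(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                              void *buf, uint64_t lba, uint32_t nblocks,
                              spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
            if (spdk_nvme_ns_get_pi_type(ns) == SPDK_NVME_FMT_NVM_PROTECTION_DISABLE) {
                    return -ENOTSUP;   /* namespace not formatted with PI */
            }
            return spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf, NULL, lba, nblocks,
                                                  cb_fn, cb_arg,
                                                  SPDK_NVME_IO_FLAGS_PRACT, 0, 0);
    }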
00:12:22.669 20:30:30 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:12:22.669 20:30:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:22.669 20:30:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:22.669 20:30:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:22.669 ************************************
00:12:22.669 START TEST nvme_reserve
00:12:22.669 ************************************
00:12:22.669 20:30:30 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:12:23.233 =====================================================
00:12:23.233 NVMe Controller at PCI bus 0, device 16, function 0
00:12:23.233 =====================================================
00:12:23.233 Reservations: Not Supported
00:12:23.233 =====================================================
00:12:23.233 NVMe Controller at PCI bus 0, device 17, function 0
00:12:23.233 =====================================================
00:12:23.233 Reservations: Not Supported
00:12:23.233 =====================================================
00:12:23.233 NVMe Controller at PCI bus 0, device 19, function 0
00:12:23.233 =====================================================
00:12:23.233 Reservations: Not Supported
00:12:23.233 =====================================================
00:12:23.233 NVMe Controller at PCI bus 0, device 18, function 0
00:12:23.233 =====================================================
00:12:23.233 Reservations: Not Supported
00:12:23.233 Reservation test passed
00:12:23.233 
00:12:23.233 real 0m0.290s
00:12:23.233 user 0m0.095s
00:12:23.233 sys 0m0.143s
00:12:23.233 20:30:31 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:23.233 ************************************
00:12:23.233 END TEST nvme_reserve
00:12:23.233 ************************************
00:12:23.233 20:30:31 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:12:23.233 20:30:31 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:12:23.233 20:30:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:23.233 20:30:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:23.233 20:30:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:23.233 ************************************
00:12:23.234 START TEST nvme_err_injection
00:12:23.234 ************************************
00:12:23.234 20:30:31 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:12:23.491 NVMe Error Injection test
00:12:23.491 Attached to 0000:00:10.0
00:12:23.491 Attached to 0000:00:11.0
00:12:23.491 Attached to 0000:00:13.0
00:12:23.491 Attached to 0000:00:12.0
00:12:23.491 0000:00:10.0: get features failed as expected
00:12:23.491 0000:00:11.0: get features failed as expected
00:12:23.491 0000:00:13.0: get features failed as expected
00:12:23.491 0000:00:12.0: get features failed as expected
00:12:23.491 0000:00:13.0: get features successfully as expected
00:12:23.491 0000:00:12.0: get features successfully as expected
00:12:23.491 0000:00:10.0: get features successfully as expected
00:12:23.491 0000:00:11.0: get features successfully as expected
00:12:23.491 0000:00:11.0: read failed as expected
00:12:23.491 0000:00:10.0: read failed as expected
00:12:23.491 0000:00:13.0: read failed as expected
00:12:23.491 0000:00:12.0: read failed as expected
00:12:23.491 0000:00:11.0: read successfully as expected
00:12:23.491 0000:00:13.0: read successfully as expected
00:12:23.491 0000:00:12.0: read successfully as expected
00:12:23.491 0000:00:10.0: read successfully as expected
00:12:23.491 Cleaning up...
00:12:23.491 
00:12:23.491 real 0m0.336s
00:12:23.491 user 0m0.127s
00:12:23.491 sys 0m0.164s
20:30:31 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:23.491 ************************************
00:12:23.491 END TEST nvme_err_injection
00:12:23.491 ************************************
00:12:23.491 20:30:31 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
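The "failed as expected" / "successfully as expected" pairs above are produced by arming SPDK's software error injector before issuing otherwise valid commands, then retrying after the injected errors are consumed. A sketch under the assumption that the test uses the public spdk_nvme_qpair_add_cmd_error_injection API; the opcodes, counts, and status codes here are illustrative:

    #include "spdk/nvme.h"

    /* Make the next Get Features on the admin queue and the next Read on an
     * I/O queue complete with Invalid Field, so the caller can assert that
     * both commands fail, then succeed once the injected errors are used up. */
    static int
    arm_error_injection(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *io_qpair)
    {
            int rc;

            /* A NULL qpair targets the admin queue. */
            rc = spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                            SPDK_NVME_OPC_GET_FEATURES, false, 0, 1,
                            SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
            if (rc != 0) {
                    return rc;
            }
            return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, io_qpair,
                            SPDK_NVME_OPC_READ, false, 0, 1,
                            SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
    }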
00:12:23.491 20:30:31 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:12:23.491 20:30:31 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:12:23.491 20:30:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:23.491 20:30:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:23.491 ************************************
00:12:23.491 START TEST nvme_overhead
00:12:23.491 ************************************
00:12:23.491 20:30:31 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:12:24.866 Initializing NVMe Controllers
00:12:24.866 Attached to 0000:00:10.0
00:12:24.866 Attached to 0000:00:11.0
00:12:24.866 Attached to 0000:00:13.0
00:12:24.866 Attached to 0000:00:12.0
00:12:24.866 Initialization complete. Launching workers.
00:12:24.866 submit (in ns) avg, min, max = 14819.9, 11413.7, 123237.8
00:12:24.866 complete (in ns) avg, min, max = 9659.0, 7935.7, 65832.1
00:12:24.866 
00:12:24.866 Submit histogram
00:12:24.866 ================
00:12:24.866 Range in us Cumulative Count
00:12:24.866 [bucket-by-bucket data condensed: 11.412 - 123.373 us, cumulative count 0.0175% -> 100.0000%]
00:12:24.867 
00:12:24.867 Complete histogram
00:12:24.867 ==================
00:12:24.867 Range in us Cumulative Count
00:12:24.868 [bucket-by-bucket data condensed: 7.916 - 66.210 us, cumulative count 0.3146% -> 100.0000%]
00:12:24.868 
00:12:24.868 real 0m1.315s
00:12:24.868 user 0m1.108s
00:12:24.868 sys 0m0.159s
00:12:24.868 20:30:32 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:24.868 ************************************
00:12:24.868 20:30:32 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:12:24.868 END TEST nvme_overhead
00:12:24.868 ************************************
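The submit and complete histograms above time only the driver software path, not the device round trip. A sketch of how one submit sample could be taken, assuming spdk_get_ticks()/spdk_get_ticks_hz() for timestamps; the global and the nanosecond conversion are assumptions of this sketch, and the real overhead tool accounts for completion time separately:

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Set once at startup: g_tsc_hz = spdk_get_ticks_hz(); */
    static uint64_t g_tsc_hz;

    /* Timestamp around the submit call only, so device latency is excluded;
     * the returned value is what a "submit (in ns)" histogram would bin. */
    static inline uint64_t
    time_one_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    void *buf, uint64_t lba, spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
            uint64_t start = spdk_get_ticks();

            spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, cb_fn, cb_arg, 0);

            uint64_t ticks = spdk_get_ticks() - start;
            return ticks * 1000000000ULL / g_tsc_hz;   /* ticks -> nanoseconds */
    }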
00:12:24.868 20:30:32 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:12:24.868 20:30:32 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:12:24.868 20:30:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:24.868 20:30:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:24.868 ************************************
00:12:24.868 START TEST nvme_arbitration
00:12:24.868 ************************************
00:12:24.868 20:30:32 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:12:29.057 Initializing NVMe Controllers
00:12:29.057 Attached to 0000:00:10.0
00:12:29.057 Attached to 0000:00:11.0
00:12:29.057 Attached to 0000:00:13.0
00:12:29.057 Attached to 0000:00:12.0
00:12:29.057 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:12:29.057 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:12:29.057 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:12:29.057 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:12:29.057 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:12:29.057 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:12:29.057 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:12:29.057 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:12:29.057 Initialization complete. Launching workers.
00:12:29.057 Starting thread on core 1 with urgent priority queue
00:12:29.057 Starting thread on core 2 with urgent priority queue
00:12:29.057 Starting thread on core 3 with urgent priority queue
00:12:29.057 Starting thread on core 0 with urgent priority queue
00:12:29.057 QEMU NVMe Ctrl (12340 ) core 0: 469.33 IO/s 213.07 secs/100000 ios
00:12:29.057 QEMU NVMe Ctrl (12342 ) core 0: 469.33 IO/s 213.07 secs/100000 ios
00:12:29.057 QEMU NVMe Ctrl (12341 ) core 1: 469.33 IO/s 213.07 secs/100000 ios
00:12:29.057 QEMU NVMe Ctrl (12342 ) core 1: 469.33 IO/s 213.07 secs/100000 ios
00:12:29.057 QEMU NVMe Ctrl (12343 ) core 2: 640.00 IO/s 156.25 secs/100000 ios
00:12:29.057 QEMU NVMe Ctrl (12342 ) core 3: 554.67 IO/s 180.29 secs/100000 ios
00:12:29.057 ========================================================
00:12:29.057 
00:12:29.057 real 0m3.471s
00:12:29.057 user 0m9.430s
00:12:29.057 sys 0m0.179s
00:12:29.057 20:30:36 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:29.057 20:30:36 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:12:29.057 ************************************
00:12:29.057 END TEST nvme_arbitration
00:12:29.057 ************************************
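The "urgent priority queue" threads above rely on NVMe weighted-round-robin arbitration, where each I/O qpair carries a priority class. A sketch of allocating a prioritized qpair, assuming the controller was attached with arb_mechanism set to SPDK_NVME_CC_AMS_WRR in its controller opts so WRR is actually in effect:

    #include "spdk/nvme.h"

    /* One qpair per core, each created with a different WRR priority class. */
    static struct spdk_nvme_qpair *
    alloc_prioritized_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio qprio)
    {
            struct spdk_nvme_io_qpair_opts opts;

            spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
            opts.qprio = qprio;    /* e.g. SPDK_NVME_QPRIO_URGENT on core 0 */
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }

With -M 50 (the mixed read/write ratio) and -l 0 (no latency tracking) as run above, the per-core IO/s lines show how the controller apportions bandwidth across the priority classes.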
00:12:29.057 20:30:36 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:12:29.057 20:30:36 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:12:29.057 20:30:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:29.057 20:30:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:29.057 ************************************
00:12:29.057 START TEST nvme_single_aen
00:12:29.057 ************************************
00:12:29.057 20:30:36 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:12:29.057 Asynchronous Event Request test
00:12:29.057 Attached to 0000:00:10.0
00:12:29.057 Attached to 0000:00:11.0
00:12:29.057 Attached to 0000:00:13.0
00:12:29.057 Attached to 0000:00:12.0
00:12:29.057 Reset controller to setup AER completions for this process
00:12:29.057 Registering asynchronous event callbacks...
00:12:29.057 Getting orig temperature thresholds of all controllers
00:12:29.057 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:29.057 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:29.057 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:29.057 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:29.057 Setting all controllers temperature threshold low to trigger AER
00:12:29.057 Waiting for all controllers temperature threshold to be set lower
00:12:29.057 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:29.057 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:12:29.057 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:29.057 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:12:29.057 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:29.057 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:12:29.057 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:29.057 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:12:29.057 Waiting for all controllers to trigger AER and reset threshold
00:12:29.057 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:12:29.057 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:12:29.057 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:12:29.057 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:12:29.057 Cleaning up...
00:12:29.057 
00:12:29.057 real 0m0.317s
00:12:29.057 user 0m0.102s
00:12:29.057 sys 0m0.163s
00:12:29.057 20:30:36 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:29.057 ************************************
00:12:29.057 END TEST nvme_single_aen
00:12:29.057 ************************************
00:12:29.057 20:30:36 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
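The aer test output above boils down to: register an AER callback, lower the temperature threshold below the current temperature so the controller raises an asynchronous event, and poll the admin queue until the callback fires. A condensed sketch under those assumptions; the 200 Kelvin threshold is an arbitrary illustration, and the real test also re-reads log page 2 and restores the original threshold:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static volatile bool g_aer_seen;

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            if (!spdk_nvme_cpl_is_error(cpl)) {
                    g_aer_seen = true;
            }
    }

    static void
    trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
            spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

            /* cdw11 holds the threshold in Kelvin; 200 K is far below any
             * plausible current temperature, so an AER must follow. */
            spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                            200, 0, NULL, 0, NULL, NULL);

            while (!g_aer_seen) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            printf("temperature AER received\n");
    }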
00:12:29.057 20:30:36 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:12:29.057 20:30:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:29.057 20:30:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:29.057 20:30:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:29.057 ************************************
00:12:29.057 START TEST nvme_doorbell_aers
00:12:29.057 ************************************
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:12:29.057 20:30:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:12:29.057 20:30:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:12:29.057 20:30:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:12:29.057 20:30:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:12:29.057 20:30:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:12:29.315 [2024-11-25 20:30:37.329748] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request.
00:12:39.298 Executing: test_write_invalid_db
00:12:39.298 Waiting for AER completion...
00:12:39.298 Failure: test_write_invalid_db
00:12:39.298 
00:12:39.298 Executing: test_invalid_db_write_overflow_sq
00:12:39.298 Waiting for AER completion...
00:12:39.298 Failure: test_invalid_db_write_overflow_sq
00:12:39.298 
00:12:39.298 Executing: test_invalid_db_write_overflow_cq
00:12:39.298 Waiting for AER completion...
00:12:39.298 Failure: test_invalid_db_write_overflow_cq
00:12:39.298 
00:12:39.298 20:30:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:12:39.298 20:30:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:12:39.298 [2024-11-25 20:30:47.412579] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request.
00:12:49.307 Executing: test_write_invalid_db
00:12:49.307 Waiting for AER completion...
00:12:49.307 Failure: test_write_invalid_db
00:12:49.307 
00:12:49.307 Executing: test_invalid_db_write_overflow_sq
00:12:49.307 Waiting for AER completion...
00:12:49.307 Failure: test_invalid_db_write_overflow_sq
00:12:49.307 
00:12:49.307 Executing: test_invalid_db_write_overflow_cq
00:12:49.307 Waiting for AER completion...
00:12:49.307 Failure: test_invalid_db_write_overflow_cq
00:12:49.307 
00:12:49.307 20:30:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:12:49.307 20:30:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:12:49.565 [2024-11-25 20:30:57.454108] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request.
00:12:59.529 Executing: test_write_invalid_db
00:12:59.529 Waiting for AER completion...
00:12:59.529 Failure: test_write_invalid_db
00:12:59.529 
00:12:59.529 Executing: test_invalid_db_write_overflow_sq
00:12:59.529 Waiting for AER completion...
00:12:59.529 Failure: test_invalid_db_write_overflow_sq
00:12:59.529 
00:12:59.529 Executing: test_invalid_db_write_overflow_cq
00:12:59.529 Waiting for AER completion...
00:12:59.529 Failure: test_invalid_db_write_overflow_cq
00:12:59.529 
00:12:59.529 20:31:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:12:59.529 20:31:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:12:59.529 [2024-11-25 20:31:07.515799] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request.
00:13:09.498 Executing: test_write_invalid_db
00:13:09.498 Waiting for AER completion...
00:13:09.498 Failure: test_write_invalid_db
00:13:09.498 
00:13:09.498 Executing: test_invalid_db_write_overflow_sq
00:13:09.498 Waiting for AER completion...
00:13:09.498 Failure: test_invalid_db_write_overflow_sq
00:13:09.498 
00:13:09.498 Executing: test_invalid_db_write_overflow_cq
00:13:09.498 Waiting for AER completion...
00:13:09.498 Failure: test_invalid_db_write_overflow_cq
00:13:09.498 
00:13:09.498 
00:13:09.498 real 0m40.365s
00:13:09.498 user 0m28.676s
00:13:09.498 sys 0m11.303s
00:13:09.498 20:31:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:09.498 20:31:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:13:09.498 ************************************
00:13:09.498 END TEST nvme_doorbell_aers
00:13:09.498 ************************************
Dropping the request. 00:13:09.756 [2024-11-25 20:31:17.636980] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request. 00:13:09.756 [2024-11-25 20:31:17.636994] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request. 00:13:09.756 [2024-11-25 20:31:17.638423] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request. 00:13:09.756 [2024-11-25 20:31:17.638463] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request. 00:13:09.757 [2024-11-25 20:31:17.638478] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64734) is not found. Dropping the request. 00:13:09.757 Child process pid: 65251 00:13:10.015 [Child] Asynchronous Event Request test 00:13:10.015 [Child] Attached to 0000:00:10.0 00:13:10.015 [Child] Attached to 0000:00:11.0 00:13:10.015 [Child] Attached to 0000:00:13.0 00:13:10.015 [Child] Attached to 0000:00:12.0 00:13:10.015 [Child] Registering asynchronous event callbacks... 00:13:10.015 [Child] Getting orig temperature thresholds of all controllers 00:13:10.015 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:10.015 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:10.015 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:10.015 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:10.015 [Child] Waiting for all controllers to trigger AER and reset threshold 00:13:10.015 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:10.015 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:10.015 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:10.015 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:10.015 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:10.015 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:10.015 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:10.015 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:10.015 [Child] Cleaning up... 00:13:10.015 Asynchronous Event Request test 00:13:10.015 Attached to 0000:00:10.0 00:13:10.015 Attached to 0000:00:11.0 00:13:10.015 Attached to 0000:00:13.0 00:13:10.015 Attached to 0000:00:12.0 00:13:10.015 Reset controller to setup AER completions for this process 00:13:10.015 Registering asynchronous event callbacks... 
00:13:10.015 Getting orig temperature thresholds of all controllers 00:13:10.015 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:10.015 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:10.015 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:10.015 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:10.015 Setting all controllers temperature threshold low to trigger AER 00:13:10.015 Waiting for all controllers temperature threshold to be set lower 00:13:10.015 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:10.015 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:10.015 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:10.015 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:10.015 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:10.015 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:10.015 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:10.015 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:10.015 Waiting for all controllers to trigger AER and reset threshold 00:13:10.015 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:10.015 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:10.015 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:10.015 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:10.015 Cleaning up... 00:13:10.015 00:13:10.015 real 0m0.645s 00:13:10.015 user 0m0.224s 00:13:10.015 sys 0m0.312s 00:13:10.015 20:31:18 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.015 20:31:18 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:13:10.015 ************************************ 00:13:10.015 END TEST nvme_multi_aen 00:13:10.015 ************************************ 00:13:10.015 20:31:18 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:10.015 20:31:18 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:10.015 20:31:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.015 20:31:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.015 ************************************ 00:13:10.015 START TEST nvme_startup 00:13:10.015 ************************************ 00:13:10.015 20:31:18 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:10.274 Initializing NVMe Controllers 00:13:10.274 Attached to 0000:00:10.0 00:13:10.274 Attached to 0000:00:11.0 00:13:10.274 Attached to 0000:00:13.0 00:13:10.274 Attached to 0000:00:12.0 00:13:10.274 Initialization complete. 00:13:10.274 Time used:198700.734 (us). 
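The START/END banner blocks and the real/user/sys timings that bracket each test in this log come from the harness's run_test wrapper. A rough sketch of that pattern, reconstructed from the visible output (a simplification; the real autotest_common.sh helper does more bookkeeping around exit codes and xtrace):

run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        # The bash time keyword produces the real/user/sys lines seen above.
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
}

# Invoked earlier as, e.g.:
#   run_test nvme_startup "$rootdir/test/nvme/startup/startup" -t 1000000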
00:13:10.274 ************************************ 00:13:10.274 END TEST nvme_startup 00:13:10.274 ************************************ 00:13:10.274 00:13:10.274 real 0m0.298s 00:13:10.274 user 0m0.099s 00:13:10.274 sys 0m0.155s 00:13:10.274 20:31:18 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.274 20:31:18 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:13:10.536 20:31:18 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:13:10.536 20:31:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.536 20:31:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.536 20:31:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.536 ************************************ 00:13:10.536 START TEST nvme_multi_secondary 00:13:10.536 ************************************ 00:13:10.536 20:31:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:13:10.536 20:31:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65306 00:13:10.536 20:31:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:13:10.536 20:31:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65307 00:13:10.536 20:31:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:10.536 20:31:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:13:13.828 Initializing NVMe Controllers 00:13:13.828 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:13.828 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:13.828 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:13.828 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:13.828 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:13.828 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:13.828 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:13.828 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:13.829 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:13.829 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:13.829 Initialization complete. Launching workers. 
00:13:13.829 ======================================================== 00:13:13.829 Latency(us) 00:13:13.829 Device Information : IOPS MiB/s Average min max 00:13:13.829 PCIE (0000:00:10.0) NSID 1 from core 2: 3027.90 11.83 5282.50 1298.28 12898.54 00:13:13.829 PCIE (0000:00:11.0) NSID 1 from core 2: 3027.90 11.83 5277.42 1356.70 13356.22 00:13:13.829 PCIE (0000:00:13.0) NSID 1 from core 2: 3027.90 11.83 5276.99 1221.74 13722.48 00:13:13.829 PCIE (0000:00:12.0) NSID 1 from core 2: 3027.90 11.83 5277.16 1208.35 12012.87 00:13:13.829 PCIE (0000:00:12.0) NSID 2 from core 2: 3027.90 11.83 5277.36 1223.17 12674.88 00:13:13.829 PCIE (0000:00:12.0) NSID 3 from core 2: 3027.90 11.83 5277.34 1224.06 12901.12 00:13:13.829 ======================================================== 00:13:13.829 Total : 18167.42 70.97 5278.13 1208.35 13722.48 00:13:13.829 00:13:14.087 20:31:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65306 00:13:14.087 Initializing NVMe Controllers 00:13:14.087 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:14.087 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:14.087 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:14.087 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:14.087 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:14.087 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:14.087 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:14.087 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:14.087 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:14.087 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:14.087 Initialization complete. Launching workers. 00:13:14.087 ======================================================== 00:13:14.087 Latency(us) 00:13:14.087 Device Information : IOPS MiB/s Average min max 00:13:14.087 PCIE (0000:00:10.0) NSID 1 from core 1: 4527.51 17.69 3531.43 1809.88 7353.24 00:13:14.087 PCIE (0000:00:11.0) NSID 1 from core 1: 4527.51 17.69 3533.33 1702.33 7156.82 00:13:14.087 PCIE (0000:00:13.0) NSID 1 from core 1: 4527.51 17.69 3533.36 1809.77 7223.92 00:13:14.087 PCIE (0000:00:12.0) NSID 1 from core 1: 4527.51 17.69 3533.58 1685.24 8164.02 00:13:14.087 PCIE (0000:00:12.0) NSID 2 from core 1: 4527.51 17.69 3533.76 1685.46 7724.14 00:13:14.087 PCIE (0000:00:12.0) NSID 3 from core 1: 4527.51 17.69 3533.98 1835.32 7633.85 00:13:14.087 ======================================================== 00:13:14.087 Total : 27165.07 106.11 3533.24 1685.24 8164.02 00:13:14.087 00:13:15.992 Initializing NVMe Controllers 00:13:15.992 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:15.992 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:15.992 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:15.992 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:15.992 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:15.992 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:15.992 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:15.992 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:15.992 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:15.992 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:15.992 Initialization complete. Launching workers. 
00:13:15.992 ======================================================== 00:13:15.992 Latency(us) 00:13:15.992 Device Information : IOPS MiB/s Average min max 00:13:15.992 PCIE (0000:00:10.0) NSID 1 from core 0: 7794.93 30.45 2050.99 941.51 8772.71 00:13:15.992 PCIE (0000:00:11.0) NSID 1 from core 0: 7794.93 30.45 2052.15 971.72 9142.81 00:13:15.992 PCIE (0000:00:13.0) NSID 1 from core 0: 7794.93 30.45 2052.11 959.63 9177.39 00:13:15.992 PCIE (0000:00:12.0) NSID 1 from core 0: 7794.93 30.45 2052.05 886.13 8555.13 00:13:15.992 PCIE (0000:00:12.0) NSID 2 from core 0: 7794.93 30.45 2052.00 836.10 8267.18 00:13:15.992 PCIE (0000:00:12.0) NSID 3 from core 0: 7794.93 30.45 2051.96 780.71 8868.14 00:13:15.992 ======================================================== 00:13:15.992 Total : 46769.58 182.69 2051.88 780.71 9177.39 00:13:15.992 00:13:15.992 20:31:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65307 00:13:15.992 20:31:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65376 00:13:15.992 20:31:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:13:15.992 20:31:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65377 00:13:15.992 20:31:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:15.992 20:31:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:13:19.322 Initializing NVMe Controllers 00:13:19.322 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:19.322 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:19.322 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:19.322 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:19.322 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:19.322 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:19.322 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:19.322 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:19.322 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:19.322 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:19.323 Initialization complete. Launching workers. 
00:13:19.323 ======================================================== 00:13:19.323 Latency(us) 00:13:19.323 Device Information : IOPS MiB/s Average min max 00:13:19.323 PCIE (0000:00:10.0) NSID 1 from core 1: 5094.09 19.90 3138.68 956.22 7039.92 00:13:19.323 PCIE (0000:00:11.0) NSID 1 from core 1: 5094.09 19.90 3140.45 955.85 7843.95 00:13:19.323 PCIE (0000:00:13.0) NSID 1 from core 1: 5094.09 19.90 3140.78 982.77 7680.71 00:13:19.323 PCIE (0000:00:12.0) NSID 1 from core 1: 5094.09 19.90 3140.94 979.25 7692.96 00:13:19.323 PCIE (0000:00:12.0) NSID 2 from core 1: 5094.09 19.90 3141.11 972.52 7022.69 00:13:19.323 PCIE (0000:00:12.0) NSID 3 from core 1: 5094.09 19.90 3141.40 965.15 6986.81 00:13:19.323 ======================================================== 00:13:19.323 Total : 30564.54 119.39 3140.56 955.85 7843.95 00:13:19.323 00:13:19.323 Initializing NVMe Controllers 00:13:19.323 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:19.323 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:19.323 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:19.323 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:19.323 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:19.323 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:19.323 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:19.323 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:19.323 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:19.323 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:19.323 Initialization complete. Launching workers. 00:13:19.323 ======================================================== 00:13:19.323 Latency(us) 00:13:19.323 Device Information : IOPS MiB/s Average min max 00:13:19.323 PCIE (0000:00:10.0) NSID 1 from core 0: 5142.93 20.09 3108.75 987.84 9865.29 00:13:19.323 PCIE (0000:00:11.0) NSID 1 from core 0: 5142.93 20.09 3110.32 1006.90 9982.10 00:13:19.323 PCIE (0000:00:13.0) NSID 1 from core 0: 5142.93 20.09 3110.25 1015.75 9639.74 00:13:19.323 PCIE (0000:00:12.0) NSID 1 from core 0: 5142.93 20.09 3110.16 1013.94 9379.52 00:13:19.323 PCIE (0000:00:12.0) NSID 2 from core 0: 5142.93 20.09 3110.07 1035.26 8737.14 00:13:19.323 PCIE (0000:00:12.0) NSID 3 from core 0: 5142.93 20.09 3110.02 963.89 9437.13 00:13:19.323 ======================================================== 00:13:19.323 Total : 30857.57 120.54 3109.93 963.89 9982.10 00:13:19.323 00:13:21.238 Initializing NVMe Controllers 00:13:21.238 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:21.238 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:21.238 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:21.238 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:21.238 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:21.238 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:21.238 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:21.238 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:21.238 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:21.238 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:21.238 Initialization complete. Launching workers. 
00:13:21.238 ======================================================== 00:13:21.238 Latency(us) 00:13:21.238 Device Information : IOPS MiB/s Average min max 00:13:21.238 PCIE (0000:00:10.0) NSID 1 from core 2: 3274.87 12.79 4884.56 1011.00 11207.09 00:13:21.238 PCIE (0000:00:11.0) NSID 1 from core 2: 3274.87 12.79 4885.48 1019.83 11605.56 00:13:21.238 PCIE (0000:00:13.0) NSID 1 from core 2: 3274.87 12.79 4885.40 1029.45 11551.98 00:13:21.238 PCIE (0000:00:12.0) NSID 1 from core 2: 3274.87 12.79 4885.32 1036.30 11324.83 00:13:21.238 PCIE (0000:00:12.0) NSID 2 from core 2: 3274.87 12.79 4885.25 1041.23 11337.60 00:13:21.238 PCIE (0000:00:12.0) NSID 3 from core 2: 3274.87 12.79 4885.16 1039.35 11530.58 00:13:21.238 ======================================================== 00:13:21.238 Total : 19649.25 76.75 4885.19 1011.00 11605.56 00:13:21.238 00:13:21.495 20:31:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65376 00:13:21.495 20:31:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65377 00:13:21.495 00:13:21.495 real 0m10.949s 00:13:21.495 user 0m18.590s 00:13:21.495 sys 0m1.004s 00:13:21.495 20:31:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.495 20:31:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:21.495 ************************************ 00:13:21.495 END TEST nvme_multi_secondary 00:13:21.495 ************************************ 00:13:21.495 20:31:29 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:21.495 20:31:29 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:21.495 20:31:29 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64315 ]] 00:13:21.495 20:31:29 nvme -- common/autotest_common.sh@1094 -- # kill 64315 00:13:21.495 20:31:29 nvme -- common/autotest_common.sh@1095 -- # wait 64315 00:13:21.495 [2024-11-25 20:31:29.452417] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.452561] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.452645] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.452700] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.459743] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.460142] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.460213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.460273] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.465480] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 
00:13:21.495 [2024-11-25 20:31:29.465770] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.495 [2024-11-25 20:31:29.465988] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.496 [2024-11-25 20:31:29.466181] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.496 [2024-11-25 20:31:29.470846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.496 [2024-11-25 20:31:29.471139] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.496 [2024-11-25 20:31:29.471527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.496 [2024-11-25 20:31:29.471735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65249) is not found. Dropping the request. 00:13:21.753 20:31:29 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:13:21.753 20:31:29 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:13:21.753 20:31:29 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:21.753 20:31:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:21.753 20:31:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.753 20:31:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.753 ************************************ 00:13:21.753 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:21.753 ************************************ 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:21.753 * Looking for test storage... 
00:13:21.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:13:21.753 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.010 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:13:22.010 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:13:22.010 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.010 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:13:22.010 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.010 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:22.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.011 --rc genhtml_branch_coverage=1 00:13:22.011 --rc genhtml_function_coverage=1 00:13:22.011 --rc genhtml_legend=1 00:13:22.011 --rc geninfo_all_blocks=1 00:13:22.011 --rc geninfo_unexecuted_blocks=1 00:13:22.011 00:13:22.011 ' 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:22.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.011 --rc genhtml_branch_coverage=1 00:13:22.011 --rc genhtml_function_coverage=1 00:13:22.011 --rc genhtml_legend=1 00:13:22.011 --rc geninfo_all_blocks=1 00:13:22.011 --rc geninfo_unexecuted_blocks=1 00:13:22.011 00:13:22.011 ' 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:22.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.011 --rc genhtml_branch_coverage=1 00:13:22.011 --rc genhtml_function_coverage=1 00:13:22.011 --rc genhtml_legend=1 00:13:22.011 --rc geninfo_all_blocks=1 00:13:22.011 --rc geninfo_unexecuted_blocks=1 00:13:22.011 00:13:22.011 ' 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:22.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.011 --rc genhtml_branch_coverage=1 00:13:22.011 --rc genhtml_function_coverage=1 00:13:22.011 --rc genhtml_legend=1 00:13:22.011 --rc geninfo_all_blocks=1 00:13:22.011 --rc geninfo_unexecuted_blocks=1 00:13:22.011 00:13:22.011 ' 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:22.011 
20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:22.011 20:31:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65545 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65545 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65545 ']' 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
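waitforlisten, traced here, is what keeps the script from racing the freshly launched spdk_tgt: it polls until the target answers on its UNIX-domain RPC socket. A simplified sketch under the same defaults seen in the trace (/var/tmp/spdk.sock, max_retries=100); the rpc_get_methods probe is an assumption about the readiness check, not a copy of the real helper:

waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
                # Stop early if the target died during startup.
                kill -0 "$pid" 2> /dev/null || return 1
                # Any successful RPC round-trip means the socket is live (assumed probe).
                if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                        return 0
                fi
                sleep 0.5
        done
        return 1
}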
00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.011 20:31:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:22.011 [2024-11-25 20:31:30.125837] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:13:22.011 [2024-11-25 20:31:30.125967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65545 ] 00:13:22.268 [2024-11-25 20:31:30.327439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.524 [2024-11-25 20:31:30.488437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.524 [2024-11-25 20:31:30.488628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.524 [2024-11-25 20:31:30.488805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.524 [2024-11-25 20:31:30.488931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:23.460 nvme0n1 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WuEbq.txt 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:23.460 true 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732566691 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65568 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:23.460 20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:23.460 
20:31:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:25.990 [2024-11-25 20:31:33.554961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:25.990 [2024-11-25 20:31:33.555465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:25.990 [2024-11-25 20:31:33.555601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:25.990 [2024-11-25 20:31:33.555712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.990 [2024-11-25 20:31:33.557743] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65568 00:13:25.990 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65568 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65568 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WuEbq.txt 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:25.990 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WuEbq.txt 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65545 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65545 ']' 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65545 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65545 00:13:25.991 killing process with pid 65545 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65545' 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65545 00:13:25.991 20:31:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65545 00:13:28.616 20:31:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:28.616 20:31:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:28.616 00:13:28.616 real 0m6.486s 00:13:28.616 user 0m22.461s 00:13:28.616 sys 0m0.841s 00:13:28.616 20:31:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:28.616 20:31:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:28.616 ************************************ 00:13:28.616 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:28.616 ************************************ 00:13:28.616 20:31:36 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:28.616 20:31:36 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:28.616 20:31:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.616 20:31:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.616 20:31:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.616 ************************************ 00:13:28.616 START TEST nvme_fio 00:13:28.616 ************************************ 00:13:28.616 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:28.616 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:28.616 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:13:28.616 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:28.616 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:28.616 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:28.616 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:28.616 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:28.616 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:28.617 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:28.875 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:28.875 20:31:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:28.875 20:31:36 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:28.875 20:31:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:29.133 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:29.133 fio-3.35 00:13:29.133 Starting 1 thread 00:13:33.318 00:13:33.318 test: (groupid=0, jobs=1): err= 0: pid=65728: Mon Nov 25 20:31:40 2024 00:13:33.318 read: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(172MiB/2001msec) 00:13:33.318 slat (usec): min=3, max=455, avg= 4.35, stdev= 2.58 00:13:33.318 clat (usec): min=217, max=13140, avg=2901.77, stdev=415.75 00:13:33.318 lat (usec): min=221, max=13216, avg=2906.12, stdev=416.19 00:13:33.318 clat percentiles (usec): 00:13:33.318 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2704], 00:13:33.318 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2900], 00:13:33.318 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3195], 95.00th=[ 3425], 00:13:33.318 | 99.00th=[ 4047], 99.50th=[ 4686], 99.90th=[ 7570], 99.95th=[10421], 00:13:33.318 | 99.99th=[12911] 00:13:33.318 bw ( KiB/s): min=85648, max=89080, per=99.62%, avg=87634.67, stdev=1778.89, samples=3 00:13:33.318 iops : min=21412, max=22270, avg=21908.67, stdev=444.72, samples=3 00:13:33.318 write: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(171MiB/2001msec); 0 zone resets 00:13:33.318 slat (usec): min=3, max=565, avg= 4.73, stdev= 3.08 00:13:33.318 clat (usec): min=209, max=12953, avg=2910.21, stdev=421.79 00:13:33.318 lat (usec): min=214, max=12974, avg=2914.94, stdev=422.17 00:13:33.318 clat percentiles (usec): 00:13:33.318 | 1.00th=[ 2114], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2737], 00:13:33.318 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:13:33.318 | 70.00th=[ 2966], 80.00th=[ 3064], 90.00th=[ 3195], 95.00th=[ 3425], 00:13:33.318 | 99.00th=[ 4047], 99.50th=[ 4686], 99.90th=[ 8586], 99.95th=[10945], 00:13:33.318 | 99.99th=[12518] 00:13:33.318 bw ( KiB/s): min=85448, max=89880, per=100.00%, avg=87789.33, stdev=2226.61, samples=3 00:13:33.318 iops : min=21362, max=22470, avg=21947.33, stdev=556.65, samples=3 00:13:33.318 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:33.318 lat (msec) : 2=0.68%, 4=98.21%, 10=1.01%, 20=0.06% 00:13:33.318 cpu : usr=98.55%, sys=0.40%, ctx=15, majf=0, 
minf=608 00:13:33.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:33.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.318 issued rwts: total=44007,43728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.318 00:13:33.318 Run status group 0 (all jobs): 00:13:33.318 READ: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=172MiB (180MB), run=2001-2001msec 00:13:33.318 WRITE: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:13:33.318 ----------------------------------------------------- 00:13:33.318 Suppressions used: 00:13:33.318 count bytes template 00:13:33.318 1 32 /usr/src/fio/parse.c 00:13:33.318 1 8 libtcmalloc_minimal.so 00:13:33.318 ----------------------------------------------------- 00:13:33.318 00:13:33.318 20:31:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:33.318 20:31:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:33.318 20:31:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:33.318 20:31:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:33.318 20:31:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:33.318 20:31:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:33.319 20:31:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:33.319 20:31:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.319 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.576 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.576 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.576 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:33.576 20:31:41 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:33.576 20:31:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.576 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:33.576 fio-3.35 00:13:33.576 Starting 1 thread 00:13:37.809 00:13:37.809 test: (groupid=0, jobs=1): err= 0: pid=65794: Mon Nov 25 20:31:45 2024 00:13:37.809 read: IOPS=21.8k, BW=85.0MiB/s (89.1MB/s)(170MiB/2001msec) 00:13:37.809 slat (nsec): min=3728, max=55626, avg=4395.63, stdev=1111.66 00:13:37.809 clat (usec): min=231, max=11911, avg=2937.10, stdev=373.97 00:13:37.809 lat (usec): min=235, max=11958, avg=2941.50, stdev=374.35 00:13:37.809 clat percentiles (usec): 00:13:37.809 | 1.00th=[ 2343], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:13:37.809 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:13:37.809 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3261], 00:13:37.809 | 99.00th=[ 4359], 99.50th=[ 5211], 99.90th=[ 6652], 99.95th=[ 9110], 00:13:37.809 | 99.99th=[11600] 00:13:37.809 bw ( KiB/s): min=85752, max=89792, per=100.00%, avg=87669.67, stdev=2027.76, samples=3 00:13:37.809 iops : min=21438, max=22448, avg=21917.33, stdev=506.95, samples=3 00:13:37.809 write: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(169MiB/2001msec); 0 zone resets 00:13:37.809 slat (nsec): min=3839, max=55655, avg=4815.99, stdev=1240.15 00:13:37.809 clat (usec): min=238, max=11790, avg=2942.10, stdev=381.82 00:13:37.809 lat (usec): min=243, max=11809, avg=2946.92, stdev=382.18 00:13:37.809 clat percentiles (usec): 00:13:37.809 | 1.00th=[ 2376], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:13:37.809 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2966], 00:13:37.809 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3261], 00:13:37.809 | 99.00th=[ 4424], 99.50th=[ 5211], 99.90th=[ 7177], 99.95th=[ 9634], 00:13:37.809 | 99.99th=[11338] 00:13:37.809 bw ( KiB/s): min=86768, max=89712, per=100.00%, avg=87851.00, stdev=1618.87, samples=3 00:13:37.809 iops : min=21692, max=22428, avg=21962.67, stdev=404.78, samples=3 00:13:37.809 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:37.809 lat (msec) : 2=0.38%, 4=98.18%, 10=1.37%, 20=0.04% 00:13:37.809 cpu : usr=99.20%, sys=0.15%, ctx=17, majf=0, minf=607 00:13:37.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:37.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.809 issued rwts: total=43533,43216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.809 00:13:37.809 Run status group 0 (all jobs): 00:13:37.809 READ: bw=85.0MiB/s (89.1MB/s), 85.0MiB/s-85.0MiB/s (89.1MB/s-89.1MB/s), io=170MiB (178MB), run=2001-2001msec 00:13:37.809 WRITE: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=169MiB (177MB), run=2001-2001msec 00:13:37.809 ----------------------------------------------------- 00:13:37.809 Suppressions used: 00:13:37.809 count bytes template 00:13:37.809 1 32 /usr/src/fio/parse.c 00:13:37.809 1 8 libtcmalloc_minimal.so 00:13:37.809 ----------------------------------------------------- 00:13:37.809 
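Every fio job in this section goes through the same fio_plugin wrapper traced above: it looks up which sanitizer runtime the SPDK ioengine links against and LD_PRELOADs it ahead of the plugin, since ASan must be the first instrumented object loaded into fio. A condensed sketch of that logic (sanitizer names, the ldd/grep/awk lookup, and the fio path all appear verbatim in the trace):

fio_plugin() {
        local plugin=$1
        shift
        local fio_dir=/usr/src/fio
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=

        # Pick the sanitizer runtime path out of the plugin's dynamic dependencies,
        # e.g. "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)" -> field 3.
        for sanitizer in "${sanitizers[@]}"; do
                asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
                [[ -n $asan_lib ]] && break
        done

        # Preload the sanitizer first, then the SPDK ioengine, then run fio.
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}

# As run above for each controller:
#   fio_plugin "$rootdir/build/fio/spdk_nvme" \
#       "$rootdir/app/fio/nvme/example_config.fio" \
#       '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096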
00:13:37.809 20:31:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:37.809 20:31:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:37.809 20:31:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:37.809 20:31:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:37.809 20:31:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:37.809 20:31:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:38.067 20:31:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:38.067 20:31:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:38.067 20:31:46 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:38.325 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:38.325 fio-3.35 00:13:38.325 Starting 1 thread 00:13:42.509 00:13:42.509 test: (groupid=0, jobs=1): err= 0: pid=65860: Mon Nov 25 20:31:50 2024 00:13:42.509 read: IOPS=22.0k, BW=86.0MiB/s (90.2MB/s)(172MiB/2001msec) 00:13:42.509 slat (nsec): min=3779, max=70437, avg=4512.98, stdev=1224.43 00:13:42.509 clat (usec): min=214, max=11027, avg=2900.33, stdev=401.14 00:13:42.509 lat (usec): min=218, max=11071, avg=2904.85, stdev=401.70 00:13:42.509 clat percentiles (usec): 00:13:42.509 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:13:42.509 | 
30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:13:42.509 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 3032], 00:13:42.509 | 99.00th=[ 4555], 99.50th=[ 5800], 99.90th=[ 8291], 99.95th=[ 8848], 00:13:42.509 | 99.99th=[10814] 00:13:42.509 bw ( KiB/s): min=86120, max=89296, per=100.00%, avg=88128.00, stdev=1746.70, samples=3 00:13:42.509 iops : min=21530, max=22324, avg=22032.00, stdev=436.67, samples=3 00:13:42.509 write: IOPS=21.9k, BW=85.5MiB/s (89.6MB/s)(171MiB/2001msec); 0 zone resets 00:13:42.509 slat (nsec): min=3893, max=47250, avg=4899.66, stdev=1214.19 00:13:42.509 clat (usec): min=239, max=10913, avg=2905.69, stdev=409.29 00:13:42.509 lat (usec): min=243, max=10930, avg=2910.59, stdev=409.83 00:13:42.509 clat percentiles (usec): 00:13:42.509 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:13:42.509 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:13:42.509 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 3032], 00:13:42.509 | 99.00th=[ 4686], 99.50th=[ 5997], 99.90th=[ 8291], 99.95th=[ 9110], 00:13:42.509 | 99.99th=[10552] 00:13:42.509 bw ( KiB/s): min=85744, max=90168, per=100.00%, avg=88320.00, stdev=2300.09, samples=3 00:13:42.509 iops : min=21436, max=22542, avg=22080.00, stdev=575.02, samples=3 00:13:42.509 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:42.509 lat (msec) : 2=0.04%, 4=98.38%, 10=1.52%, 20=0.02% 00:13:42.509 cpu : usr=99.30%, sys=0.10%, ctx=3, majf=0, minf=607 00:13:42.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:42.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.509 issued rwts: total=44061,43777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.509 00:13:42.509 Run status group 0 (all jobs): 00:13:42.509 READ: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=172MiB (180MB), run=2001-2001msec 00:13:42.509 WRITE: bw=85.5MiB/s (89.6MB/s), 85.5MiB/s-85.5MiB/s (89.6MB/s-89.6MB/s), io=171MiB (179MB), run=2001-2001msec 00:13:42.509 ----------------------------------------------------- 00:13:42.509 Suppressions used: 00:13:42.509 count bytes template 00:13:42.509 1 32 /usr/src/fio/parse.c 00:13:42.509 1 8 libtcmalloc_minimal.so 00:13:42.509 ----------------------------------------------------- 00:13:42.509 00:13:42.509 20:31:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:42.509 20:31:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:42.509 20:31:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:42.509 20:31:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:42.509 20:31:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:42.509 20:31:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:43.078 20:31:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:43.078 20:31:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:43.078 20:31:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:43.078 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:43.078 fio-3.35 00:13:43.078 Starting 1 thread 00:13:48.360 00:13:48.360 test: (groupid=0, jobs=1): err= 0: pid=65926: Mon Nov 25 20:31:55 2024 00:13:48.360 read: IOPS=21.0k, BW=82.1MiB/s (86.1MB/s)(164MiB/2001msec) 00:13:48.360 slat (nsec): min=4404, max=53138, avg=5278.52, stdev=1190.19 00:13:48.360 clat (usec): min=202, max=14780, avg=3038.98, stdev=429.36 00:13:48.360 lat (usec): min=208, max=14833, avg=3044.26, stdev=429.93 00:13:48.360 clat percentiles (usec): 00:13:48.360 | 1.00th=[ 2802], 5.00th=[ 2868], 10.00th=[ 2900], 20.00th=[ 2933], 00:13:48.360 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:13:48.360 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3130], 95.00th=[ 3228], 00:13:48.360 | 99.00th=[ 4015], 99.50th=[ 5800], 99.90th=[ 8848], 99.95th=[11207], 00:13:48.360 | 99.99th=[14353] 00:13:48.360 bw ( KiB/s): min=81352, max=85160, per=99.68%, avg=83816.00, stdev=2136.82, samples=3 00:13:48.360 iops : min=20338, max=21290, avg=20954.00, stdev=534.21, samples=3 00:13:48.360 write: IOPS=20.9k, BW=81.7MiB/s (85.6MB/s)(163MiB/2001msec); 0 zone resets 00:13:48.360 slat (nsec): min=4418, max=68417, avg=5448.19, stdev=1244.71 00:13:48.360 clat (usec): min=236, max=14530, avg=3037.28, stdev=436.68 00:13:48.360 lat (usec): min=241, max=14551, avg=3042.73, stdev=437.26 00:13:48.360 clat percentiles (usec): 00:13:48.360 | 1.00th=[ 2802], 5.00th=[ 2868], 10.00th=[ 2900], 20.00th=[ 2933], 00:13:48.360 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:13:48.360 | 70.00th=[ 3032], 80.00th=[ 
3064], 90.00th=[ 3097], 95.00th=[ 3228], 00:13:48.360 | 99.00th=[ 4047], 99.50th=[ 6063], 99.90th=[ 8979], 99.95th=[11469], 00:13:48.360 | 99.99th=[13829] 00:13:48.360 bw ( KiB/s): min=81256, max=85240, per=100.00%, avg=83888.00, stdev=2279.66, samples=3 00:13:48.360 iops : min=20314, max=21310, avg=20972.00, stdev=569.92, samples=3 00:13:48.360 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:48.360 lat (msec) : 2=0.05%, 4=98.83%, 10=1.01%, 20=0.07% 00:13:48.360 cpu : usr=99.30%, sys=0.20%, ctx=5, majf=0, minf=605 00:13:48.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:48.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.360 issued rwts: total=42064,41836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.360 00:13:48.360 Run status group 0 (all jobs): 00:13:48.360 READ: bw=82.1MiB/s (86.1MB/s), 82.1MiB/s-82.1MiB/s (86.1MB/s-86.1MB/s), io=164MiB (172MB), run=2001-2001msec 00:13:48.360 WRITE: bw=81.7MiB/s (85.6MB/s), 81.7MiB/s-81.7MiB/s (85.6MB/s-85.6MB/s), io=163MiB (171MB), run=2001-2001msec 00:13:48.360 ----------------------------------------------------- 00:13:48.360 Suppressions used: 00:13:48.360 count bytes template 00:13:48.360 1 32 /usr/src/fio/parse.c 00:13:48.360 1 8 libtcmalloc_minimal.so 00:13:48.360 ----------------------------------------------------- 00:13:48.360 00:13:48.360 20:31:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:48.360 20:31:56 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:48.360 00:13:48.360 real 0m19.950s 00:13:48.360 user 0m14.802s 00:13:48.360 sys 0m6.159s 00:13:48.360 20:31:56 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.360 ************************************ 00:13:48.360 END TEST nvme_fio 00:13:48.360 ************************************ 00:13:48.360 20:31:56 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:48.360 00:13:48.360 real 1m35.419s 00:13:48.360 user 3m44.075s 00:13:48.360 sys 0m25.361s 00:13:48.360 ************************************ 00:13:48.360 END TEST nvme 00:13:48.360 ************************************ 00:13:48.360 20:31:56 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.360 20:31:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.360 20:31:56 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:48.360 20:31:56 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:48.360 20:31:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:48.360 20:31:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.360 20:31:56 -- common/autotest_common.sh@10 -- # set +x 00:13:48.360 ************************************ 00:13:48.360 START TEST nvme_scc 00:13:48.360 ************************************ 00:13:48.360 20:31:56 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:48.360 * Looking for test storage... 
00:13:48.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:48.360 20:31:56 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:48.360 20:31:56 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:48.360 20:31:56 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:48.360 20:31:56 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:48.360 20:31:56 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.619 20:31:56 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:48.619 20:31:56 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:48.619 20:31:56 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.619 20:31:56 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:48.619 20:31:56 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.619 20:31:56 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.619 20:31:56 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.619 20:31:56 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:48.619 20:31:56 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.619 20:31:56 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:48.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.619 --rc genhtml_branch_coverage=1 00:13:48.619 --rc genhtml_function_coverage=1 00:13:48.619 --rc genhtml_legend=1 00:13:48.619 --rc geninfo_all_blocks=1 00:13:48.619 --rc geninfo_unexecuted_blocks=1 00:13:48.619 00:13:48.619 ' 00:13:48.619 20:31:56 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:48.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.619 --rc genhtml_branch_coverage=1 00:13:48.619 --rc genhtml_function_coverage=1 00:13:48.619 --rc genhtml_legend=1 00:13:48.619 --rc geninfo_all_blocks=1 00:13:48.619 --rc geninfo_unexecuted_blocks=1 00:13:48.619 00:13:48.619 ' 00:13:48.620 20:31:56 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:48.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.620 --rc genhtml_branch_coverage=1 00:13:48.620 --rc genhtml_function_coverage=1 00:13:48.620 --rc genhtml_legend=1 00:13:48.620 --rc geninfo_all_blocks=1 00:13:48.620 --rc geninfo_unexecuted_blocks=1 00:13:48.620 00:13:48.620 ' 00:13:48.620 20:31:56 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:48.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.620 --rc genhtml_branch_coverage=1 00:13:48.620 --rc genhtml_function_coverage=1 00:13:48.620 --rc genhtml_legend=1 00:13:48.620 --rc geninfo_all_blocks=1 00:13:48.620 --rc geninfo_unexecuted_blocks=1 00:13:48.620 00:13:48.620 ' 00:13:48.620 20:31:56 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.620 20:31:56 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.620 20:31:56 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.620 20:31:56 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.620 20:31:56 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.620 20:31:56 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.620 20:31:56 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.620 20:31:56 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.620 20:31:56 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:48.620 20:31:56 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
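The 'lt 1.15 2' probe traced a few lines up is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x before choosing coverage flags: cmp_versions splits both version strings into arrays on '.' and '-' (IFS=.-) and compares them field by field. A simplified sketch of just the '<' case (the real helper also handles '>', '==', and the ge/le wrappers):

# Simplified sketch of the cmp_versions logic seen above; '<' case only.
lt() {
    local IFS=.-
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) i
    for ((i = 0; i < n; i++)); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"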
00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:48.620 20:31:56 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:48.620 20:31:56 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.620 20:31:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:48.620 20:31:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:48.620 20:31:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:48.620 20:31:56 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:49.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:49.448 Waiting for block devices as requested 00:13:49.448 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.448 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.707 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.707 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:54.985 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:54.985 20:32:02 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:54.985 20:32:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:54.985 20:32:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:54.985 20:32:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:54.985 20:32:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.985 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:54.986 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:54.987 20:32:02 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.987 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.988 20:32:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:54.989 20:32:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0 id-ctrl trailing fields: nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' nvme0[active_power_workload]='-'
00:13:54.989 20:32:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:13:54.989 20:32:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:54.989 20:32:02 nvme_scc -- nvme/functions.sh@57 -- # found /sys/class/nvme/nvme0/ng0n1 -> nvme_get ng0n1 id-ns /dev/ng0n1 (via /usr/local/src/nvme-cli/nvme)
00:13:54.989 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # ng0n1 id-ns: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:54.990 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # ng0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:54.990 20:32:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng0n1
00:13:54.990 20:32:02 nvme_scc -- nvme/functions.sh@57 -- # found /sys/class/nvme/nvme0/nvme0n1 -> nvme_get nvme0n1 id-ns /dev/nvme0n1 (parse follows)
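The register-by-register churn in this trace is nvme/functions.sh's nvme_get helper: it runs nvme-cli (id-ctrl or id-ns), splits every 'reg : val' line of the output on the colon, and evals the pair into a global associative array named after the device node. A minimal sketch of that pattern, reconstructed from the trace (the real helper's whitespace collapsing and quoting may differ):

    nvme_get() {                        # usage: nvme_get ng0n1 id-ns /dev/ng0n1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"             # global assoc array named after the device
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue   # skip banner and blank lines
            # tool output is trusted here; builds e.g. ng0n1[nsze]="0x140000"
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After the call each field reads back directly, e.g. ${ng0n1[nsze]} expands to 0x140000 for the namespace parsed above.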
00:13:54.991 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1 id-ns (identical to ng0n1): nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:54.992 20:32:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:54.992 20:32:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme0n1
00:13:54.992 20:32:02 nvme_scc -- nvme/functions.sh@60-63 -- # ctrls[nvme0]=nvme0 nvmes[nvme0]=nvme0_ns bdfs[nvme0]=0000:00:11.0 ordered_ctrls[0]=nvme0
00:13:54.992 20:32:02 nvme_scc -- nvme/functions.sh@47-51 -- # next controller: /sys/class/nvme/nvme1, pci=0000:00:10.0, pci_can_use 0000:00:10.0 returns 0 (usable), ctrl_dev=nvme1
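Zooming out, functions.sh@47-63 is one discovery pass: enumerate /sys/class/nvme/nvme*, skip controllers that pci_can_use rejects, identify each controller, then glob both the generic char node (ng0n1) and the block node (nvme0n1) for every namespace, recording the results in associative arrays. A condensed sketch of that loop; two pieces not shown in this excerpt are assumptions here, namely the BDF lookup via the device symlink and the scan_ctrls wrapper function (used so the namerefs can stay local):

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    scan_ctrls() {
        local ctrl pci ctrl_dev ns ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed source of e.g. 0000:00:10.0
            pci_can_use "$pci" || continue                   # honors PCI block lists
            ctrl_dev=${ctrl##*/}
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -n _ctrl_ns=${ctrl_dev}_ns
            # the extglob matches ng0n1 (char) and nvme0n1 (block) alike
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev                  # keyed by namespace number
            done
            ctrls[$ctrl_dev]=$ctrl_dev
            nvmes[$ctrl_dev]=${ctrl_dev}_ns
            bdfs[$ctrl_dev]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }

This is why the same id-ns record appears twice per namespace above: once for ng0n1 and once for nvme0n1.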
00:13:54.992 20:32:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 (via /usr/local/src/nvme-cli/nvme)
00:13:54.993 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
00:13:54.994 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1 id-ctrl (cont.): hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0
nvme1[fcatt]=0 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.996 20:32:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
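The functions.sh@16-23 trace repeating above is the nvme_get parser: it runs nvme-cli's id-ctrl/id-ns against a device, splits each output line on ':' into reg/val, and evals the pair into a global associative array (nvme1, ng1n1, nvme1n1, ...). A rough sketch of that loop, assuming the usual "key : value" nvme-cli output; the real SPDK helper may differ in detail:

nvme_get() {                                  # nvme_get <ref> <subcmd> <dev>
  local ref=$1 reg val
  shift                                       # functions.sh@18
  local -gA "$ref=()"                         # functions.sh@20
  while IFS=: read -r reg val; do             # functions.sh@21
    [[ -n $val ]] || continue                 # functions.sh@22: skip non-pairs
    eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""   # functions.sh@23
  done < <(/usr/local/src/nvme-cli/nvme "$@") # functions.sh@16
}

Invoked as in the trace, e.g. nvme_get ng1n1 id-ns /dev/ng1n1, which yields the ng1n1[nsze]=0x17a17a style assignments that follow; stripping whitespace from reg is also what turns a "ps 0" output line into the nvme1[ps0] key seen earlier.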
00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.996 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:54.997 20:32:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.997 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 
20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
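The for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* loop seen above at functions.sh@54 is an extglob pattern that picks up both namespace node flavours under a controller: the character device (ng1n1) and the block device (nvme1n1), which is why the same id-ns fields are dumped twice. A standalone check of the expansion, assuming the sysfs path exists:

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
# ${ctrl##*nvme} -> "1", ${ctrl##*/} -> "nvme1", so the pattern is @(ng1|nvme1n)*
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "namespace node: ${ns##*/}"   # -> ng1n1, then nvme1n1
done

The ${ns##*n} index used at functions.sh@58 then reduces either name to the namespace number, so ng1n1 and nvme1n1 land in the same _ctrl_ns slot (the later nvme1n1 entry wins).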
00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:54.998 
20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.998 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.999 20:32:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:54.999 20:32:03 
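Tying the lbaf lines together: the flbas=0x7 recorded earlier selects LBA format 7, and the matching lbaf7 entry just above ("ms:64 lbads:12 rp:0 (in use)") means 2^12-byte data blocks with 64 bytes of metadata. A small decode under those values (bits 3:0 of FLBAS give the format index here):

flbas=0x7
lbaf7='ms:64 lbads:12 rp:0 (in use)'
idx=$((flbas & 0xf))                         # -> 7
lbads=${lbaf7##*lbads:}; lbads=${lbads%% *}  # -> 12
ms=${lbaf7#ms:};         ms=${ms%% *}        # -> 64
echo "format $idx: $((1 << lbads))-byte blocks + $ms metadata bytes"
# -> format 7: 4096-byte blocks + 64 metadata bytes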
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:55.000 20:32:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:55.000 20:32:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:55.000 20:32:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.000 20:32:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
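The functions.sh@47-63 steps visible above form the outer discovery loop: walk /sys/class/nvme/nvme*, gate each controller on pci_can_use, parse it with nvme_get, then register it in the ctrls/nvmes/bdfs/ordered_ctrls maps. A paraphrased sketch of that bookkeeping; the pci derivation via sysfs readlink is an assumption, not verbatim SPDK:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue                          # functions.sh@48
  pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:12.0 (@49)
  ctrl_dev=${ctrl##*/}                                # e.g. nvme2 (@51)
  nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"       # @52, sketched earlier
  ctrls["$ctrl_dev"]=$ctrl_dev                        # @60
  nvmes["$ctrl_dev"]=${ctrl_dev}_ns                   # @61: name of its ns map
  bdfs["$ctrl_dev"]=$pci                              # @62
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # @63: index by number
done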
00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:55.000 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.266 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:55.267 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
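Fields captured this way are typically consumed later by simple bitmask checks on the array values. For the nvme_scc suite the relevant capability is the Copy command, which the NVMe spec advertises in ONCS bit 8; the oncs value recorded for this controller further down (0x15d) has that bit set. A hypothetical check in the same style (this helper is illustrative only, not a function from functions.sh):

  # Hypothetical helper: test ONCS bit 8 (Copy) on an array filled by nvme_get.
  supports_copy() {
      local -n ctrl=$1             # nameref to a captured identify array, e.g. nvme2
      (( (ctrl[oncs] >> 8) & 1 ))  # arithmetic context parses the stored 0x... string
  }
  supports_copy nvme2 && echo "nvme2 advertises the Copy command"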
00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:55.267 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:55.268 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:55.268 
20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:55.268 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.269 
20:32:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
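The functions.sh@53-@58 entries bracketing each ng2nN block come from the namespace-discovery loop: for each controller sysfs entry the script globs both the character (ng2nN) and block (nvme2nN) node names, runs nvme_get with id-ns on each one that exists, and records the device in a per-controller nvme2_ns map keyed by namespace number. Reconstructed from the trace as a sketch (it assumes extglob is enabled and that the _ns array is declared by the caller, as functions.sh arranges):

  scan_ctrl_namespaces() {                     # sketch of functions.sh@53-@58
      local ctrl=$1 ns ns_dev                  # e.g. ctrl=/sys/class/nvme/nvme2
      local -n _ctrl_ns="${ctrl##*/}_ns"       # @53: nameref to nvme2_ns
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54
          [[ -e $ns ]] || continue             # @55: skip unmatched glob literals
          ns_dev=${ns##*/}                     # @56: ng2n1, ng2n2, ...
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57
          _ctrl_ns[${ns##*n}]=$ns_dev          # @58: key by namespace number
      done
  }

The per-namespace lbafN strings captured below (ms/lbads/rp) describe the supported LBA formats; with flbas=0x4 the in-use format is lbaf4, ms:0 lbads:12, i.e. 4096-byte data blocks with no metadata.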
00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:55.269 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.270 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:55.271 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 
20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.271 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.272 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.273 20:32:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.273 20:32:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:13:55.273 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.274 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.274 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- 
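The functions.sh@54-58 lines show the enclosing loop: for each controller it globs both the generic character nodes (ng2nN) and the block nodes (nvme2nN) under the controller's sysfs directory, runs `nvme_get` on each, and indexes the result by namespace number in `_ctrl_ns` — which is why the same namespace data appears twice in this log, once per node type. A rough sketch under the same assumptions (extglob enabled, `nvme_get` as sketched earlier):

```bash
#!/usr/bin/env bash
# Sketch of the namespace-enumeration loop traced above (functions.sh@54-58).
shopt -s extglob nullglob        # @(...) alternation needs extglob
declare -A _ctrl_ns=()
ctrl=/sys/class/nvme/nvme2

# ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2", so the glob matches both
# /sys/class/nvme/nvme2/ng2n* and /sys/class/nvme/nvme2/nvme2n*.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
	[[ -e $ns ]] || continue
	ns_dev=${ns##*/}                      # e.g. ng2n3 or nvme2n3
	nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
	_ctrl_ns[${ns##*n}]=$ns_dev           # key = namespace number, e.g. 3
done
```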
nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.275 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:55.275 20:32:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.275 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.276 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:55.277 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.277 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:55.278 
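Every namespace in this run records flbas=0x4 with lbaf4 marked "(in use)", i.e. LBA format 4, lbads:12, ms:0 — 4096-byte data blocks with no metadata. A small sketch of how the captured values decode (per the NVMe spec, FLBAS bits 3:0 select the active format and LBADS is log2 of the data size); the array literal below is hard-coded for illustration rather than read from a device:

```bash
#!/usr/bin/env bash
# Sketch: decode flbas/lbafN values like the ones captured above.
declare -A nvme2n2=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

fmt=$(( nvme2n2[flbas] & 0xf ))                 # FLBAS bits 3:0 -> format 4
lbaf=${nvme2n2[lbaf$fmt]}                       # "ms:0 lbads:12 rp:0 (in use)"
[[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
echo "LBA format $fmt: block size $((1 << lbads)) bytes"   # -> 4096
```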
20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:55.278 20:32:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.278 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.279 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.279 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:55.280 20:32:03 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:55.280 20:32:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:55.280 20:32:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:55.280 20:32:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.280 20:32:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.280 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:55.280 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:55.281 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 
20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:55.281 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:55.282 20:32:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 
20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:55.282 
20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.282 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:55.283 20:32:03 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:55.283 20:32:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:55.283 20:32:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
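Each ctrl_has_scc probe in the scan above resolves the controller's ONCS register through get_nvme_ctrl_feature, whose body is traced at functions.sh@69-76 (and repeats just below for nvme3 and nvme2): it takes a controller name and a register key, then uses a bash nameref to reach into the associative array that scan_nvme_ctrls populated. A minimal sketch of that lookup, reconstructed from the trace rather than copied from test/common/nvme/functions.sh, so guards and quoting may differ:

    # Requires bash 4.3+ for 'local -n' namerefs.
    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=$2        # traced as: local ctrl=nvme1 reg=oncs
        [[ -n $ctrl ]] || return 1  # guard seen at functions.sh@71
        local -n _ctrl=$ctrl        # nameref into the nvme0/nvme1/... array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"  # prints e.g. 0x15d
    }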
00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:55.284 20:32:03 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:55.284 20:32:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:55.284 20:32:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:55.284 20:32:03 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:56.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:56.789 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.789 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.789 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.789 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.047 20:32:05 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:57.047 20:32:05 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:57.047 20:32:05 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.047 20:32:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:57.047 ************************************ 00:13:57.047 START TEST nvme_simple_copy 00:13:57.047 ************************************ 00:13:57.047 20:32:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:57.306 Initializing NVMe Controllers 00:13:57.306 Attaching to 0000:00:10.0 00:13:57.306 Controller supports SCC. Attached to 0000:00:10.0 00:13:57.306 Namespace ID: 1 size: 6GB 00:13:57.306 Initialization complete. 
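The scan settles on nvme1 at 0000:00:10.0 because every controller here reports oncs=0x15d, and bit 8 of the ONCS field advertises the Simple Copy command that the simple_copy test below exercises (0x15d & 0x100 is non-zero). A minimal sketch of the capability test traced at functions.sh@184-188, assuming get_oncs wraps the register lookup shown earlier:

    ctrl_has_scc() {
        local ctrl=$1 oncs
        oncs=$(get_oncs "$ctrl")  # e.g. 0x15d from the cached id-ctrl data
        (( oncs & 1 << 8 ))       # ONCS bit 8: Simple Copy supported
    }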
00:13:57.306 
00:13:57.306 Controller QEMU NVMe Ctrl (12340 )
00:13:57.306 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:13:57.306 Namespace Block Size:4096
00:13:57.306 Writing LBAs 0 to 63 with Random Data
00:13:57.306 Copied LBAs from 0 - 63 to the Destination LBA 256
00:13:57.306 LBAs matching Written Data: 64
00:13:57.306 
00:13:57.306 real 0m0.321s
00:13:57.306 user 0m0.116s
00:13:57.306 sys 0m0.104s
00:13:57.306 20:32:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:57.306 20:32:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:13:57.306 ************************************
00:13:57.306 END TEST nvme_simple_copy
00:13:57.306 ************************************
00:13:57.306 
00:13:57.306 real 0m9.103s
00:13:57.306 user 0m1.532s
00:13:57.306 sys 0m2.403s
00:13:57.306 20:32:05 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:57.306 20:32:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:13:57.306 ************************************
00:13:57.306 END TEST nvme_scc
00:13:57.306 ************************************
00:13:57.565 20:32:05 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:13:57.565 20:32:05 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:13:57.565 20:32:05 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:13:57.565 20:32:05 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:13:57.565 20:32:05 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:13:57.565 20:32:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:57.565 20:32:05 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:57.565 20:32:05 -- common/autotest_common.sh@10 -- # set +x
00:13:57.565 ************************************
00:13:57.565 START TEST nvme_fdp
00:13:57.565 ************************************
00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:13:57.565 * Looking for test storage...
00:13:57.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.565 20:32:05 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:57.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.565 --rc genhtml_branch_coverage=1 00:13:57.565 --rc genhtml_function_coverage=1 00:13:57.565 --rc genhtml_legend=1 00:13:57.565 --rc geninfo_all_blocks=1 00:13:57.565 --rc geninfo_unexecuted_blocks=1 00:13:57.565 00:13:57.565 ' 00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:57.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.565 --rc genhtml_branch_coverage=1 00:13:57.565 --rc genhtml_function_coverage=1 00:13:57.565 --rc genhtml_legend=1 00:13:57.565 --rc geninfo_all_blocks=1 00:13:57.565 --rc geninfo_unexecuted_blocks=1 00:13:57.565 00:13:57.565 ' 00:13:57.565 20:32:05 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:57.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.566 --rc genhtml_branch_coverage=1 00:13:57.566 --rc genhtml_function_coverage=1 00:13:57.566 --rc genhtml_legend=1 00:13:57.566 --rc geninfo_all_blocks=1 00:13:57.566 --rc geninfo_unexecuted_blocks=1 00:13:57.566 00:13:57.566 ' 00:13:57.566 20:32:05 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:57.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.566 --rc genhtml_branch_coverage=1 00:13:57.566 --rc genhtml_function_coverage=1 00:13:57.566 --rc genhtml_legend=1 00:13:57.566 --rc geninfo_all_blocks=1 00:13:57.566 --rc geninfo_unexecuted_blocks=1 00:13:57.566 00:13:57.566 ' 00:13:57.566 20:32:05 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:57.566 20:32:05 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.825 20:32:05 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.825 20:32:05 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.825 20:32:05 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.825 20:32:05 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.825 20:32:05 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.825 20:32:05 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.825 20:32:05 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.825 20:32:05 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:57.825 20:32:05 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:57.825 20:32:05 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:57.825 20:32:05 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:57.825 20:32:05 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:58.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:58.393 Waiting for block devices as requested 00:13:58.653 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.653 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.653 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.913 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:04.197 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:04.197 20:32:11 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:14:04.197 20:32:11 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:14:04.197 20:32:11 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:04.197 20:32:11 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:14:04.197 20:32:11 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:14:04.197 20:32:11 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:14:04.198 20:32:11 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:04.198 20:32:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:04.198 20:32:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:04.198 20:32:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:14:04.198 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:14:04.198 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:04.199 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:14:04.199 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:14:04.200 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:14:04.201 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 
20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:14:04.201 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.201 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:14:04.202 20:32:12 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:14:04.202 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.202 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:14:04.203 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:14:04.204 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
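
What the xtrace above is doing: test/common/nvme/functions.sh caches every field of `nvme id-ctrl` / `nvme id-ns` output into bash associative arrays (nvme0[vid]=0x1b36, ng0n1[nsze]=0x140000, and so on) by splitting each output line on IFS=: into a register name and a value. A minimal standalone sketch of that technique — not the harness itself — assuming nvme-cli's usual "field : value" output format and the /dev/nvme0 controller probed above (needs root; the array name ctrl is illustrative):

    #!/usr/bin/env bash
    # Read "name : value" pairs from nvme-cli identify output into an
    # associative array, the same IFS=: / read -r loop visible in the
    # nvme_get xtrace above.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}     # left-trim the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

Values that themselves contain colons (the subnqn above, nqn.2019-08.org.qemu:12341) survive intact, because read -r assigns everything after the first colon to val.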
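The ng0n1 dump that just completed ends with eight LBA formats (lbaf0 through lbaf7), and flbas=0x4 selects lbaf4 — "ms:0 lbads:12 rp:0 (in use)" — i.e. 2^12 = 4096-byte logical blocks, consistent with the "Namespace Block Size:4096" reported by the simple-copy test at the top of this log. A hedged sketch of that decode, with the field values captured above hard-coded for illustration:

    # flbas bits 0-3 index the in-use LBA format; lbads is a power-of-two
    # shift giving the logical block size in bytes.
    declare -A ns=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ${ns[flbas]} & 0xf ))                        # -> 4
    [[ ${ns[lbaf$fmt]} =~ lbads:([0-9]+) ]] &&
        echo "block size: $(( 1 << BASH_REMATCH[1] ))"   # -> 4096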
00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.204 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:14:04.205 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.205 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.206 20:32:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.206 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:14:04.207 20:32:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:04.207 20:32:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:04.207 20:32:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:04.207 20:32:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:04.207 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:04.207 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
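Every entry in this stretch repeats the same functions.sh@21-23 pattern: set IFS=:, read one reg/val pair from nvme-cli, and eval it into the controller's global associative array. A hedged reconstruction of that loop, pieced together from the trace lines (the real nvme_get in nvme/functions.sh may differ in detail):

    # Reconstruction from the trace, not the verbatim source: nvme_get
    # parses `nvme id-ctrl` / `nvme id-ns` output into a global assoc array.
    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                          # functions.sh@20
      while IFS=: read -r reg val; do              # functions.sh@21
        [[ -n $val ]] || continue                  # functions.sh@22
        eval "${ref}[${reg// /}]=\"${val# }\""     # functions.sh@23
      done < <(/usr/local/src/nvme-cli/nvme "$@")  # functions.sh@16
    }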
00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:04.208 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
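A worked number from the values captured above: nvme1[mdts]=7 means the controller's maximum data transfer size is 2^7 minimum-size memory pages. The log does not show CAP.MPSMIN, so assuming the common 4 KiB minimum page:

    mdts=${nvme1[mdts]}                # 7, parsed above
    mpsmin=4096                        # assumption: 4 KiB CAP.MPSMIN (not in this log)
    echo $(( (1 << mdts) * mpsmin ))   # 524288 bytes = 512 KiB per transfer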
00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:04.209 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.210 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:14:04.211 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:14:04.212 20:32:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
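Sketch of how flbas selects among the lbafN descriptors (assumed bit layout per the NVMe base spec: the low nibble is the in-use format index): ng1n1 reports flbas=0x7, so its active descriptor is lbaf7, where nvme0n1 above had flbas=0x4 and lbaf4:

    fmt=$(( ${ng1n1[flbas]} & 0xf ))   # 7 -> in-use format index
    echo "${ng1n1[lbaf$fmt]}"          # prints the lbaf7 line once parsed below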
00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.212 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:14:04.213 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
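The entries that follow for ng1n1 (nguid, eui64, then lbaf0 through lbaf7) record the namespace identifiers and its eight LBA format descriptors. In each descriptor, ms is metadata bytes per block, lbads is log2 of the data block size, and rp is relative performance; the low nibble of flbas (captured above as 0x7) indexes the format in use, which is why the trace tags lbaf7, i.e. 4096-byte blocks plus 64 bytes of metadata, with "(in use)". A hedged helper, not part of functions.sh, deriving the block size from the captured fields:

  fmt=$(( ${ng1n1[flbas]} & 0xf ))            # 0x7 -> format 7
  [[ ${ng1n1[lbaf$fmt]} =~ lbads:([0-9]+) ]]
  echo "in use: lbaf$fmt, $(( 1 << BASH_REMATCH[1] ))-byte data blocks"   # 4096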
00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.213 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.213 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.214 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:14:04.214 20:32:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:14:04.214 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:14:04.215 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.215 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:14:04.216 20:32:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:04.216 20:32:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:04.216 20:32:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:04.216 20:32:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.216 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
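Before this id-ctrl replay began, functions.sh@47-52 walked /sys/class/nvme/nvme*, resolved nvme2 to PCI address 0000:00:12.0, and let pci_can_use (scripts/common.sh@18-27) accept it because the PCI allow/block lists are empty. A sketch of that walk; the readlink-based PCI lookup is an assumption, since the trace only shows the resulting address:

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
      pci_can_use "$pci" || continue                    # allow/block list filter
      ctrl_dev=${ctrl##*/}                              # nvme2
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills the nvme2 array
  done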
00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:14:04.217 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:14:04.217 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
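The wctemp and cctemp values just captured (343 and 373) are in Kelvin, the unit Identify Controller uses for its temperature thresholds; converted to Celsius:

  echo "warning:  $(( ${nvme2[wctemp]} - 273 )) C"   # 343 K -> 70 C
  echo "critical: $(( ${nvme2[cctemp]} - 273 )) C"   # 373 K -> 100 C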
00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:14:04.218 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.218 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.219 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
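Among the fields above, sqes=0x66 and cqes=0x44 each pack two sizes: bits 3:0 are the required queue entry size and bits 7:4 the maximum, both as powers of two, so this QEMU controller (subnqn nqn.2019-08.org.qemu:12342) uses the standard 64-byte submission and 16-byte completion entries. Decoded from the captured values:

  sqes=${nvme2[sqes]} cqes=${nvme2[cqes]}
  echo "SQE $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) bytes"   # 64..64
  echo "CQE $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) bytes"   # 16..16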
00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:14:04.220 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # 
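The trace above is the register-parse loop in nvme/functions.sh: nvme_get shifts off the array name, declares a global associative array for the device (local -gA), then splits every "reg : val" line of nvme-cli output on ':' and evals the pair into the array. A minimal, self-contained sketch of that pattern, with assumed helper and variable names rather than the literal functions.sh source:

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl` output into a global associative array, the way
    # the nvme_get trace above does: IFS=: splits reg/val, empty values are
    # skipped, and eval stores the pair under the caller-chosen array name.
    nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"                        # e.g. declare -gA nvme2=()
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}       # trim leading spaces in the value
        [[ -n $val ]] && eval "$ref[$reg]=\$val" # e.g. nvme2[vwc]=0x7
      done < <(nvme id-ctrl "$dev")
    }
    # usage sketch: nvme_get_sketch nvme2 /dev/nvme2; echo "${nvme2[subnqn]}"

The eval indirection is what lets one function fill nvme2, ng2n1, nvme2n1, and so on, since bash has no direct way to assign into an associative array whose name is held in a variable.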
IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 
20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.221 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.222 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.488 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # 
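Between namespaces the loop at functions.sh@54 re-enters the extglob pattern "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, which enumerates both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) under the controller's sysfs directory. A short sketch of how that glob expands, with illustrative paths (extglob is evidently enabled in the test shell):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
    # matches "$ctrl"/ng2* and "$ctrl"/nvme2n* in a single pass.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      ns_dev=${ns##*/}            # e.g. ng2n1 or nvme2n1
      echo "namespace node: $ns_dev"
    done

That is why the trace visits ng2n1 through ng2n3 first and only then reaches /sys/class/nvme/nvme2/nvme2n1: the ng* names sort before the nvme2n* names in the expansion.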
ng2n2[nsze]=0x100000 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.488 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:14:04.489 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 
20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:14:04.489 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # 
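Each namespace records the same eight LBA formats; the in-use one here is lbaf4 = "ms:0 lbads:12 rp:0 (in use)", i.e. no metadata and 2^12 = 4096-byte logical blocks. Combined with nsze = 0x100000 blocks, each of these QEMU namespaces works out to 0x100000 * 4096 bytes = 4 GiB. A small sketch of decoding the stored lbaf strings (variable names assumed):

    lbaf='ms:0 lbads:12 rp:0 (in use)'          # value stored by the trace above
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    bs=$((1 << lbads))                          # -> 4096-byte logical blocks
    bytes=$(( 0x100000 * bs ))                  # nsze blocks * block size
    echo "block size ${bs}B, namespace $((bytes >> 30)) GiB"   # -> 4 GiB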
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:14:04.490 
20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.490 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:14:04.491 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.491 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.492 20:32:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:04.492 20:32:12 nvme_fdp -- 
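After each parse, functions.sh@58 files the namespace into the controller's bookkeeping array through the nameref declared earlier (local -n _ctrl_ns=nvme2_ns), keyed by the index that ${ns##*n} strips out (ng2n3 -> 3). The block node nvme2n3 yields the same key, so its later registration would land on and replace the generic entry. A minimal sketch of that mechanism, with an illustrative wrapper name:

    declare -A nvme2_ns=()
    register_ns() {                  # register_ns <ctrl_ns_array> <ns_node>
      local -n _ctrl_ns=$1           # nameref: writes land in nvme2_ns
      local ns=$2
      _ctrl_ns[${ns##*n}]=$ns        # ng2n3 -> key 3, nvme2n3 -> key 3
    }
    register_ns nvme2_ns ng2n3
    register_ns nvme2_ns nvme2n3
    echo "ns 3 -> ${nvme2_ns[3]}"    # prints: ns 3 -> nvme2n3

The nameref keeps the loop body generic: the same code maintains nvme0_ns, nvme1_ns, nvme2_ns, etc., depending on which controller is being walked.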
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.492 
20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.492 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:14:04.493 20:32:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.493 
20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.493 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
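Annotation for readers following the trace: every nvme_get invocation above follows one pattern, visible at nvme/functions.sh@16-23 in the markers: run the pinned nvme-cli binary, split each 'reg : val' output line on the first colon, and eval the pair into an associative array named after the controller or namespace. A minimal standalone sketch of that pattern follows; the helper name is illustrative, and it assumes a stock 'nvme' binary in PATH rather than the /usr/local/src/nvme-cli build used by the job.

#!/usr/bin/env bash
# Sketch of the id-ns parsing loop the trace shows (nvme/functions.sh@16-23).
nvme_get_sketch() {
  local ref=$1 dev=$2 reg val
  local -gA "$ref=()"                 # global assoc array named by $ref
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}          # id-ns pads names: 'lbaf  0' -> 'lbaf0'
    [[ -n $reg && -n $val ]] || continue
    eval "${ref}[\$reg]=\$val"        # e.g. nvme2n1[nsze]=' 0x100000'
  done < <(nvme id-ns "$dev")
}
nvme_get_sketch nvme2n1 /dev/nvme2n1 && echo "${nvme2n1[nsze]}"

Because val is the last field of the read, it keeps everything after the first colon, which is how composite entries such as lbaf4 ('ms:0 lbads:12 rp:0 (in use)') survive intact in the arrays above.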
00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:14:04.494 20:32:12 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.494 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:14:04.495 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.495 20:32:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.495 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:14:04.496 20:32:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.496 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:14:04.497 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.497 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:14:04.497 20:32:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:04.497 20:32:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:04.497 20:32:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:04.497 20:32:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.497 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- 
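Annotation: at this point the helper has finished nvme2's three namespaces, registered the controller, and moved on to nvme3 at 0000:00:13.0. The surrounding discovery walk (nvme/functions.sh@47-63 in the markers) can be summarized with the sketch below. Two simplifications are assumptions, not the script's code: the real pci_can_use matches the BDF against allow/block lists in scripts/common.sh and is stubbed to "allow" here, and the per-controller _ctrl_ns indirection is flattened to a space-separated list. The readlink-based BDF lookup is likewise an assumed equivalent of the pci= value in the trace.

#!/usr/bin/env bash
# Sketch of the controller/namespace walk traced from nvme/functions.sh@47-63.
shopt -s extglob nullglob
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
pci_can_use() { return 0; }           # stub for the allow/block-list check
for ctrl in /sys/class/nvme/nvme*; do
  ctrl_dev=${ctrl##*/}                                # e.g. nvme2
  pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:12.0 (assumed lookup)
  pci_can_use "$pci" || continue
  ns_list=()
  # Namespaces sit under the controller dir as nvme2n1... or ng2n1...
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e /sys/class/nvme/$ctrl_dev/${ns##*/} ]] || continue
    ns_list+=("${ns##*/}")            # the real script indexes _ctrl_ns by ${ns##*n}
  done
  ctrls["$ctrl_dev"]=$ctrl_dev
  nvmes["$ctrl_dev"]=${ns_list[*]}
  bdfs["$ctrl_dev"]=$pci
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # sparse array keyed by ctrl index
done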
nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:14:04.498 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 
20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:14:04.499 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.500 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:04.501 20:32:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:14:04.501 20:32:12 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:14:04.762 20:32:12 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:14:04.762 20:32:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:14:04.763 20:32:12 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:14:04.763 20:32:12 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:05.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:06.267 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:06.267 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:06.267 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:06.267 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:06.268 20:32:14 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:06.268 20:32:14 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:06.268 20:32:14 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.268 20:32:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:06.268 ************************************ 00:14:06.268 START TEST nvme_flexible_data_placement 00:14:06.268 ************************************ 00:14:06.268 20:32:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:06.527 Initializing NVMe Controllers 00:14:06.527 Attaching to 0000:00:13.0 00:14:06.527 Controller supports FDP Attached to 0000:00:13.0 00:14:06.527 Namespace ID: 1 Endurance Group ID: 1 00:14:06.527 Initialization complete. 
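The xtrace above shows how functions.sh picks the FDP-capable controller: each controller's identify output is read as reg/value pairs into an associative array, then CTRATT bit 19 (the Flexible Data Placement attribute) is tested. A condensed sketch of that pattern, with the CTRATT value copied from this run (illustrative only, not the full script; the trace needs eval because the array name is dynamic, a fixed name does not):

    # Parse "reg : value" identify output into a per-controller array.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # field name, e.g. "ctratt"
        [[ -n $reg && -n $val ]] && nvme3[$reg]=${val# }
    done < <(nvme id-ctrl /dev/nvme3)            # illustrative input source

    # FDP support is then a single bit test: CTRATT bit 19.
    # 0x88010 (nvme3 in this run) has it set; 0x8000 does not.
    ctrl_has_fdp() {
        local -n _ctrl=$1                        # nameref to the ctrl array
        local ctratt=${_ctrl[ctratt]}
        (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp nvme3 && echo nvme3             # matches the trace's 'echo nvme3'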
00:14:06.527 00:14:06.527 ================================== 00:14:06.527 == FDP tests for Namespace: #01 == 00:14:06.527 ================================== 00:14:06.527 00:14:06.527 Get Feature: FDP: 00:14:06.527 ================= 00:14:06.527 Enabled: Yes 00:14:06.527 FDP configuration Index: 0 00:14:06.527 00:14:06.527 FDP configurations log page 00:14:06.527 =========================== 00:14:06.527 Number of FDP configurations: 1 00:14:06.527 Version: 0 00:14:06.527 Size: 112 00:14:06.527 FDP Configuration Descriptor: 0 00:14:06.527 Descriptor Size: 96 00:14:06.527 Reclaim Group Identifier format: 2 00:14:06.527 FDP Volatile Write Cache: Not Present 00:14:06.527 FDP Configuration: Valid 00:14:06.527 Vendor Specific Size: 0 00:14:06.527 Number of Reclaim Groups: 2 00:14:06.527 Number of Reclaim Unit Handles: 8 00:14:06.527 Max Placement Identifiers: 128 00:14:06.527 Number of Namespaces Supported: 256 00:14:06.527 Reclaim Unit Nominal Size: 6000000 bytes 00:14:06.527 Estimated Reclaim Unit Time Limit: Not Reported 00:14:06.527 RUH Desc #000: RUH Type: Initially Isolated 00:14:06.527 RUH Desc #001: RUH Type: Initially Isolated 00:14:06.527 RUH Desc #002: RUH Type: Initially Isolated 00:14:06.527 RUH Desc #003: RUH Type: Initially Isolated 00:14:06.527 RUH Desc #004: RUH Type: Initially Isolated 00:14:06.527 RUH Desc #005: RUH Type: Initially Isolated 00:14:06.527 RUH Desc #006: RUH Type: Initially Isolated 00:14:06.527 RUH Desc #007: RUH Type: Initially Isolated 00:14:06.527 00:14:06.527 FDP reclaim unit handle usage log page 00:14:06.527 ====================================== 00:14:06.527 Number of Reclaim Unit Handles: 8 00:14:06.527 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:06.527 RUH Usage Desc #001: RUH Attributes: Unused 00:14:06.527 RUH Usage Desc #002: RUH Attributes: Unused 00:14:06.527 RUH Usage Desc #003: RUH Attributes: Unused 00:14:06.527 RUH Usage Desc #004: RUH Attributes: Unused 00:14:06.527 RUH Usage Desc #005: RUH Attributes: Unused 00:14:06.527 RUH Usage Desc #006: RUH Attributes: Unused 00:14:06.527 RUH Usage Desc #007: RUH Attributes: Unused 00:14:06.527 00:14:06.527 FDP statistics log page 00:14:06.527 ======================= 00:14:06.527 Host bytes with metadata written: 944533504 00:14:06.527 Media bytes with metadata written: 944627712 00:14:06.527 Media bytes erased: 0 00:14:06.527 00:14:06.527 FDP Reclaim unit handle status 00:14:06.527 ============================== 00:14:06.527 Number of RUHS descriptors: 2 00:14:06.527 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003b39 00:14:06.527 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:14:06.527 00:14:06.527 FDP write on placement id: 0 success 00:14:06.527 00:14:06.527 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:14:06.527 00:14:06.527 IO mgmt send: RUH update for Placement ID: #0 Success 00:14:06.528 00:14:06.528 Get Feature: FDP Events for Placement handle: #0 00:14:06.528 ======================== 00:14:06.528 Number of FDP Events: 6 00:14:06.528 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:14:06.528 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:14:06.528 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:14:06.528 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:14:06.528 FDP Event: #4 Type: Media Reallocated Enabled: No 00:14:06.528 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:14:06.528 00:14:06.528 FDP events log page
00:14:06.528 =================== 00:14:06.528 Number of FDP events: 1 00:14:06.528 FDP Event #0: 00:14:06.528 Event Type: RU Not Written to Capacity 00:14:06.528 Placement Identifier: Valid 00:14:06.528 NSID: Valid 00:14:06.528 Location: Valid 00:14:06.528 Placement Identifier: 0 00:14:06.528 Event Timestamp: 8 00:14:06.528 Namespace Identifier: 1 00:14:06.528 Reclaim Group Identifier: 0 00:14:06.528 Reclaim Unit Handle Identifier: 0 00:14:06.528 00:14:06.528 FDP test passed 00:14:06.787 00:14:06.787 real 0m0.303s 00:14:06.787 user 0m0.100s 00:14:06.787 sys 0m0.096s 00:14:06.787 20:32:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.787 20:32:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:14:06.787 ************************************ 00:14:06.787 END TEST nvme_flexible_data_placement 00:14:06.787 ************************************ 00:14:06.787 00:14:06.787 real 0m9.261s 00:14:06.787 user 0m1.747s 00:14:06.787 sys 0m2.635s 00:14:06.787 20:32:14 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.787 20:32:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:06.787 ************************************ 00:14:06.787 END TEST nvme_fdp 00:14:06.787 ************************************ 00:14:06.787 20:32:14 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:14:06.787 20:32:14 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:06.787 20:32:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:06.787 20:32:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.787 20:32:14 -- common/autotest_common.sh@10 -- # set +x 00:14:06.787 ************************************ 00:14:06.787 START TEST nvme_rpc 00:14:06.787 ************************************ 00:14:06.787 20:32:14 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:07.047 * Looking for test storage... 
00:14:07.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:07.047 20:32:14 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:07.047 20:32:14 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:07.047 20:32:14 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:07.047 20:32:15 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.047 20:32:15 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:14:07.047 20:32:15 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.047 20:32:15 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:07.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.047 --rc genhtml_branch_coverage=1 00:14:07.047 --rc genhtml_function_coverage=1 00:14:07.047 --rc genhtml_legend=1 00:14:07.047 --rc geninfo_all_blocks=1 00:14:07.047 --rc geninfo_unexecuted_blocks=1 00:14:07.047 00:14:07.047 ' 00:14:07.047 20:32:15 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:07.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.047 --rc genhtml_branch_coverage=1 00:14:07.047 --rc genhtml_function_coverage=1 00:14:07.047 --rc genhtml_legend=1 00:14:07.047 --rc geninfo_all_blocks=1 00:14:07.047 --rc geninfo_unexecuted_blocks=1 00:14:07.047 00:14:07.047 ' 00:14:07.047 20:32:15 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:07.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.047 --rc genhtml_branch_coverage=1 00:14:07.047 --rc genhtml_function_coverage=1 00:14:07.047 --rc genhtml_legend=1 00:14:07.047 --rc geninfo_all_blocks=1 00:14:07.047 --rc geninfo_unexecuted_blocks=1 00:14:07.047 00:14:07.047 ' 00:14:07.047 20:32:15 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:07.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.048 --rc genhtml_branch_coverage=1 00:14:07.048 --rc genhtml_function_coverage=1 00:14:07.048 --rc genhtml_legend=1 00:14:07.048 --rc geninfo_all_blocks=1 00:14:07.048 --rc geninfo_unexecuted_blocks=1 00:14:07.048 00:14:07.048 ' 00:14:07.048 20:32:15 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.048 20:32:15 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:07.048 20:32:15 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:07.308 20:32:15 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:14:07.308 20:32:15 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:14:07.308 20:32:15 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67338 00:14:07.308 20:32:15 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:07.308 20:32:15 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:14:07.308 20:32:15 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67338 00:14:07.308 20:32:15 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67338 ']' 00:14:07.308 20:32:15 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.308 20:32:15 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.308 20:32:15 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.308 20:32:15 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.308 20:32:15 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.308 [2024-11-25 20:32:15.314251] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
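The get_first_nvme_bdf call traced above resolves the test's target controller before spdk_tgt comes up. A minimal sketch of that helper, assuming the same gen_nvme.sh JSON layout shown in the trace:

    # Enumerate NVMe controllers from gen_nvme.sh's generated config and
    # return the first BDF (0000:00:10.0 in this run).
    get_first_nvme_bdf() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # no NVMe devices found
        echo "${bdfs[0]}"
    }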
00:14:07.308 [2024-11-25 20:32:15.314417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67338 ] 00:14:07.567 [2024-11-25 20:32:15.502360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:07.567 [2024-11-25 20:32:15.657906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.567 [2024-11-25 20:32:15.657955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.946 20:32:16 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.946 20:32:16 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:08.946 20:32:16 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:14:08.946 Nvme0n1 00:14:08.946 20:32:17 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:14:08.946 20:32:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:14:09.209 request: 00:14:09.209 { 00:14:09.209 "bdev_name": "Nvme0n1", 00:14:09.209 "filename": "non_existing_file", 00:14:09.209 "method": "bdev_nvme_apply_firmware", 00:14:09.209 "req_id": 1 00:14:09.209 } 00:14:09.209 Got JSON-RPC error response 00:14:09.209 response: 00:14:09.209 { 00:14:09.209 "code": -32603, 00:14:09.209 "message": "open file failed." 00:14:09.209 } 00:14:09.209 20:32:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:14:09.209 20:32:17 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:14:09.209 20:32:17 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:09.475 20:32:17 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:09.475 20:32:17 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67338 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67338 ']' 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67338 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67338 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67338' 00:14:09.475 killing process with pid 67338 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67338 00:14:09.475 20:32:17 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67338 00:14:12.767 00:14:12.767 real 0m5.508s 00:14:12.767 user 0m9.854s 00:14:12.767 sys 0m1.018s 00:14:12.767 20:32:20 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.767 20:32:20 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.767 ************************************ 00:14:12.767 END TEST nvme_rpc 00:14:12.767 ************************************ 00:14:12.767 20:32:20 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:12.767 20:32:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:14:12.767 20:32:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.767 20:32:20 -- common/autotest_common.sh@10 -- # set +x 00:14:12.767 ************************************ 00:14:12.767 START TEST nvme_rpc_timeouts 00:14:12.767 ************************************ 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:12.767 * Looking for test storage... 00:14:12.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.767 20:32:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:12.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.767 --rc genhtml_branch_coverage=1 00:14:12.767 --rc genhtml_function_coverage=1 00:14:12.767 --rc genhtml_legend=1 00:14:12.767 --rc geninfo_all_blocks=1 00:14:12.767 --rc geninfo_unexecuted_blocks=1 00:14:12.767 00:14:12.767 ' 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:12.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.767 --rc genhtml_branch_coverage=1 00:14:12.767 --rc genhtml_function_coverage=1 00:14:12.767 --rc genhtml_legend=1 00:14:12.767 --rc geninfo_all_blocks=1 00:14:12.767 --rc geninfo_unexecuted_blocks=1 00:14:12.767 00:14:12.767 ' 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:12.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.767 --rc genhtml_branch_coverage=1 00:14:12.767 --rc genhtml_function_coverage=1 00:14:12.767 --rc genhtml_legend=1 00:14:12.767 --rc geninfo_all_blocks=1 00:14:12.767 --rc geninfo_unexecuted_blocks=1 00:14:12.767 00:14:12.767 ' 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:12.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.767 --rc genhtml_branch_coverage=1 00:14:12.767 --rc genhtml_function_coverage=1 00:14:12.767 --rc genhtml_legend=1 00:14:12.767 --rc geninfo_all_blocks=1 00:14:12.767 --rc geninfo_unexecuted_blocks=1 00:14:12.767 00:14:12.767 ' 00:14:12.767 20:32:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.767 20:32:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67425 00:14:12.767 20:32:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67425 00:14:12.767 20:32:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67457 00:14:12.767 20:32:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:12.767 20:32:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:14:12.767 20:32:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67457 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67457 ']' 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.767 20:32:20 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:12.767 [2024-11-25 20:32:20.768573] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:14:12.767 [2024-11-25 20:32:20.768944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67457 ] 00:14:13.027 [2024-11-25 20:32:20.962838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:13.027 [2024-11-25 20:32:21.129131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.027 [2024-11-25 20:32:21.129175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.406 Checking default timeout settings: 00:14:14.406 20:32:22 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.406 20:32:22 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:14:14.406 20:32:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:14:14.406 20:32:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:14.666 Making settings changes with rpc: 00:14:14.666 20:32:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:14:14.666 20:32:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:14:14.925 Check default vs. modified settings: 00:14:14.925 20:32:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:14:14.925 20:32:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67425 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67425 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:15.185 Setting action_on_timeout is changed as expected. 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67425 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67425 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:15.185 Setting timeout_us is changed as expected. 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67425 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67425 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:15.185 Setting timeout_admin_us is changed as expected. 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67425 /tmp/settings_modified_67425 00:14:15.185 20:32:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67457 00:14:15.185 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67457 ']' 00:14:15.186 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67457 00:14:15.186 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:14:15.186 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.186 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67457 00:14:15.446 killing process with pid 67457 00:14:15.446 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.446 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.446 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67457' 00:14:15.446 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67457 00:14:15.446 20:32:23 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67457 00:14:17.982 RPC TIMEOUT SETTING TEST PASSED. 00:14:17.982 20:32:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
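[Editor's note] The PASSED verdict above comes from diffing two save_config snapshots: the test saves the default bdev_nvme options, applies new timeouts over RPC, saves again, and asserts each field actually changed. A minimal standalone sketch of that check, using the same grep|awk|sed pipeline traced above (assumes rpc.py is reachable and a spdk_tgt is listening; the /tmp file names mirror the trace):

    #!/usr/bin/env bash
    # Sketch of the default-vs-modified settings check traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc save_config > /tmp/settings_default         # snapshot the defaults
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified        # snapshot after the change

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && exit 1          # value must have changed
        echo "Setting $setting is changed as expected."
    done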
00:14:17.982 00:14:17.982 real 0m5.720s 00:14:17.982 user 0m10.580s 00:14:17.982 sys 0m1.042s 00:14:18.240 ************************************ 00:14:18.240 END TEST nvme_rpc_timeouts 00:14:18.240 ************************************ 00:14:18.240 20:32:26 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.240 20:32:26 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:18.240 20:32:26 -- spdk/autotest.sh@239 -- # uname -s 00:14:18.240 20:32:26 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:14:18.240 20:32:26 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:18.240 20:32:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.240 20:32:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.240 20:32:26 -- common/autotest_common.sh@10 -- # set +x 00:14:18.240 ************************************ 00:14:18.240 START TEST sw_hotplug 00:14:18.240 ************************************ 00:14:18.240 20:32:26 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:18.240 * Looking for test storage... 00:14:18.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:18.240 20:32:26 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:18.240 20:32:26 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:14:18.240 20:32:26 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:18.499 20:32:26 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:14:18.499 20:32:26 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.500 20:32:26 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:14:18.500 20:32:26 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.500 20:32:26 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.500 --rc genhtml_branch_coverage=1 00:14:18.500 --rc genhtml_function_coverage=1 00:14:18.500 --rc genhtml_legend=1 00:14:18.500 --rc geninfo_all_blocks=1 00:14:18.500 --rc geninfo_unexecuted_blocks=1 00:14:18.500 00:14:18.500 ' 00:14:18.500 20:32:26 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.500 --rc genhtml_branch_coverage=1 00:14:18.500 --rc genhtml_function_coverage=1 00:14:18.500 --rc genhtml_legend=1 00:14:18.500 --rc geninfo_all_blocks=1 00:14:18.500 --rc geninfo_unexecuted_blocks=1 00:14:18.500 00:14:18.500 ' 00:14:18.500 20:32:26 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.500 --rc genhtml_branch_coverage=1 00:14:18.500 --rc genhtml_function_coverage=1 00:14:18.500 --rc genhtml_legend=1 00:14:18.500 --rc geninfo_all_blocks=1 00:14:18.500 --rc geninfo_unexecuted_blocks=1 00:14:18.500 00:14:18.500 ' 00:14:18.500 20:32:26 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:18.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.500 --rc genhtml_branch_coverage=1 00:14:18.500 --rc genhtml_function_coverage=1 00:14:18.500 --rc genhtml_legend=1 00:14:18.500 --rc geninfo_all_blocks=1 00:14:18.500 --rc geninfo_unexecuted_blocks=1 00:14:18.500 00:14:18.500 ' 00:14:18.500 20:32:26 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:19.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:19.367 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:19.367 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:19.367 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:19.367 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:19.367 20:32:27 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:19.367 20:32:27 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:19.367 20:32:27 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
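[Editor's note] The lt 1.15 2 / cmp_versions trace above (here and at the start of this section) is scripts/common.sh gating the LCOV options on the installed lcov version: each version string is split on '.', '-' or ':' and compared component-wise. A condensed sketch of that helper, not the verbatim scripts/common.sh body (the original also validates each component via its decimal helper):

    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # Walk the longer of the two component lists; missing components count as 0.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == ">" ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" ]]  # all components equal
    }

So lt 1.15 2 succeeds at the first component (1 < 2), which is why the branch-coverage flags get enabled above.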
00:14:19.367 20:32:27 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@233 -- # local class 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:19.367 20:32:27 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:19.367 20:32:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:14:19.368 20:32:27 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:19.368 20:32:27 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:19.368 20:32:27 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:19.368 20:32:27 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:19.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:20.194 Waiting for block devices as requested 00:14:20.194 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:20.452 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:20.452 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:20.452 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:25.738 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:25.738 20:32:33 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:25.738 20:32:33 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:26.306 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:26.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:26.306 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:26.875 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:27.134 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:27.134 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:27.134 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:27.134 20:32:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:27.393 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:27.393 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:27.393 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68357 00:14:27.393 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:27.393 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:27.393 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:27.393 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:27.393 20:32:35 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:27.394 20:32:35 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:27.394 20:32:35 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:27.394 20:32:35 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:27.394 20:32:35 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:14:27.394 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:27.394 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:27.394 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:27.394 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:27.394 20:32:35 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:27.394 Initializing NVMe Controllers 00:14:27.652 Attaching to 0000:00:10.0 00:14:27.652 Attaching to 0000:00:11.0 00:14:27.652 Attached to 0000:00:10.0 00:14:27.652 Attached to 0000:00:11.0 00:14:27.652 Initialization complete. Starting I/O... 
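[Editor's note] The remove/attach cycles that follow (the echo 1 traces at sw_hotplug.sh@40 and @56, and the uio_pci_generic echoes at @58-@62) drive the kernel's standard sysfs PCI hotplug interface. The helper body itself is not shown in the trace, so the sketch below uses the standard sysfs files rather than a verbatim copy; the exact rebind sequence in the script may differ:

    # Hot-remove one controller, then rescan and rebind it (standard sysfs files):
    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"           # device vanishes; SPDK logs "in failed state"
    sleep 6                                               # hotplug_wait from the trace
    echo 1 > /sys/bus/pci/rescan                          # bring the device back
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe              # bind it to the overridden driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override" # clear the override

The "cannot open sysfs value .../vendor" EAL warnings below are the expected side effect: the sysfs node is gone while the device is removed, so the PCI rescan inside the app fails harmlessly until the controller is re-added.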
00:14:27.652 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:27.652 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:14:27.652 00:14:28.590 QEMU NVMe Ctrl (12340 ): 1464 I/Os completed (+1464) 00:14:28.590 QEMU NVMe Ctrl (12341 ): 1464 I/Os completed (+1464) 00:14:28.590 00:14:29.526 QEMU NVMe Ctrl (12340 ): 3264 I/Os completed (+1800) 00:14:29.526 QEMU NVMe Ctrl (12341 ): 3272 I/Os completed (+1808) 00:14:29.526 00:14:30.462 QEMU NVMe Ctrl (12340 ): 5272 I/Os completed (+2008) 00:14:30.462 QEMU NVMe Ctrl (12341 ): 5280 I/Os completed (+2008) 00:14:30.462 00:14:31.841 QEMU NVMe Ctrl (12340 ): 7216 I/Os completed (+1944) 00:14:31.841 QEMU NVMe Ctrl (12341 ): 7224 I/Os completed (+1944) 00:14:31.841 00:14:32.408 QEMU NVMe Ctrl (12340 ): 9236 I/Os completed (+2020) 00:14:32.408 QEMU NVMe Ctrl (12341 ): 9244 I/Os completed (+2020) 00:14:32.408 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:33.347 [2024-11-25 20:32:41.309372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:33.347 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:33.347 [2024-11-25 20:32:41.311409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.311507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.311613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.311643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:33.347 [2024-11-25 20:32:41.315017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.315170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.315223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.315316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:33.347 [2024-11-25 20:32:41.348534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:33.347 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:33.347 [2024-11-25 20:32:41.350403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.350456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.350485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.350509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:33.347 [2024-11-25 20:32:41.353390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.353592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.353623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 [2024-11-25 20:32:41.353642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:33.347 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:33.347 EAL: Scan for (pci) bus failed. 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:33.347 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:33.606 00:14:33.606 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:33.606 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:33.606 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:33.606 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:33.606 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:33.606 Attaching to 0000:00:10.0 00:14:33.606 Attached to 0000:00:10.0 00:14:33.606 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:33.606 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:33.606 20:32:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:33.606 Attaching to 0000:00:11.0 00:14:33.606 Attached to 0000:00:11.0 00:14:34.550 QEMU NVMe Ctrl (12340 ): 1780 I/Os completed (+1780) 00:14:34.550 QEMU NVMe Ctrl (12341 ): 1572 I/Os completed (+1572) 00:14:34.550 00:14:35.488 QEMU NVMe Ctrl (12340 ): 3648 I/Os completed (+1868) 00:14:35.488 QEMU NVMe Ctrl (12341 ): 3440 I/Os completed (+1868) 00:14:35.488 00:14:36.427 QEMU NVMe Ctrl (12340 ): 5508 I/Os completed (+1860) 00:14:36.427 QEMU NVMe Ctrl (12341 ): 5300 I/Os completed (+1860) 00:14:36.427 00:14:37.808 QEMU NVMe Ctrl (12340 ): 7260 I/Os completed (+1752) 00:14:37.808 QEMU NVMe Ctrl (12341 ): 7052 I/Os completed (+1752) 00:14:37.808 00:14:38.745 QEMU NVMe Ctrl (12340 ): 9160 I/Os completed (+1900) 00:14:38.745 QEMU NVMe Ctrl (12341 ): 8955 I/Os completed (+1903) 00:14:38.745 00:14:39.684 QEMU NVMe Ctrl (12340 ): 11096 I/Os completed (+1936) 00:14:39.684 QEMU NVMe Ctrl (12341 ): 10891 I/Os completed (+1936) 00:14:39.684 00:14:40.622 QEMU NVMe Ctrl (12340 ): 13040 I/Os completed (+1944) 00:14:40.622 QEMU NVMe Ctrl (12341 ): 12835 I/Os completed (+1944) 
00:14:40.622 00:14:41.559 QEMU NVMe Ctrl (12340 ): 14960 I/Os completed (+1920) 00:14:41.559 QEMU NVMe Ctrl (12341 ): 14762 I/Os completed (+1927) 00:14:41.559 00:14:42.496 QEMU NVMe Ctrl (12340 ): 17044 I/Os completed (+2084) 00:14:42.496 QEMU NVMe Ctrl (12341 ): 16846 I/Os completed (+2084) 00:14:42.496 00:14:43.459 QEMU NVMe Ctrl (12340 ): 19268 I/Os completed (+2224) 00:14:43.459 QEMU NVMe Ctrl (12341 ): 19070 I/Os completed (+2224) 00:14:43.459 00:14:44.396 QEMU NVMe Ctrl (12340 ): 21452 I/Os completed (+2184) 00:14:44.396 QEMU NVMe Ctrl (12341 ): 21254 I/Os completed (+2184) 00:14:44.396 00:14:45.773 QEMU NVMe Ctrl (12340 ): 23676 I/Os completed (+2224) 00:14:45.773 QEMU NVMe Ctrl (12341 ): 23478 I/Os completed (+2224) 00:14:45.773 00:14:45.773 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:45.773 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:45.773 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:45.773 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:45.773 [2024-11-25 20:32:53.678400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:45.773 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:45.773 [2024-11-25 20:32:53.680185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.680376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.680433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.680531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:45.773 [2024-11-25 20:32:53.683598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.683746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.683797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.683911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:45.773 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:45.773 [2024-11-25 20:32:53.716922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:45.773 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:45.773 [2024-11-25 20:32:53.719069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.719166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.719225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.719273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:45.773 [2024-11-25 20:32:53.722141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.722275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.773 [2024-11-25 20:32:53.722349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.774 [2024-11-25 20:32:53.722403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:45.774 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:45.774 EAL: Scan for (pci) bus failed. 00:14:45.774 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:45.774 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:45.774 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:45.774 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:45.774 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:46.033 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:46.033 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:46.033 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:46.033 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:46.033 20:32:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:46.033 Attaching to 0000:00:10.0 00:14:46.033 Attached to 0000:00:10.0 00:14:46.033 20:32:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:46.033 20:32:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:46.033 20:32:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:46.033 Attaching to 0000:00:11.0 00:14:46.033 Attached to 0000:00:11.0 00:14:46.601 QEMU NVMe Ctrl (12340 ): 1168 I/Os completed (+1168) 00:14:46.601 QEMU NVMe Ctrl (12341 ): 944 I/Os completed (+944) 00:14:46.601 00:14:47.538 QEMU NVMe Ctrl (12340 ): 3292 I/Os completed (+2124) 00:14:47.538 QEMU NVMe Ctrl (12341 ): 3068 I/Os completed (+2124) 00:14:47.538 00:14:48.475 QEMU NVMe Ctrl (12340 ): 5424 I/Os completed (+2132) 00:14:48.475 QEMU NVMe Ctrl (12341 ): 5200 I/Os completed (+2132) 00:14:48.475 00:14:49.412 QEMU NVMe Ctrl (12340 ): 7640 I/Os completed (+2216) 00:14:49.412 QEMU NVMe Ctrl (12341 ): 7416 I/Os completed (+2216) 00:14:49.412 00:14:50.789 QEMU NVMe Ctrl (12340 ): 9571 I/Os completed (+1931) 00:14:50.789 QEMU NVMe Ctrl (12341 ): 9348 I/Os completed (+1932) 00:14:50.789 00:14:51.728 QEMU NVMe Ctrl (12340 ): 11539 I/Os completed (+1968) 00:14:51.728 QEMU NVMe Ctrl (12341 ): 11316 I/Os completed (+1968) 00:14:51.728 00:14:52.666 QEMU NVMe Ctrl (12340 ): 13539 I/Os completed (+2000) 00:14:52.666 QEMU NVMe Ctrl (12341 ): 13316 I/Os completed (+2000) 00:14:52.666 
00:14:53.603 QEMU NVMe Ctrl (12340 ): 15535 I/Os completed (+1996) 00:14:53.603 QEMU NVMe Ctrl (12341 ): 15312 I/Os completed (+1996) 00:14:53.603 00:14:54.541 QEMU NVMe Ctrl (12340 ): 17516 I/Os completed (+1981) 00:14:54.541 QEMU NVMe Ctrl (12341 ): 17292 I/Os completed (+1980) 00:14:54.541 00:14:55.481 QEMU NVMe Ctrl (12340 ): 19356 I/Os completed (+1840) 00:14:55.481 QEMU NVMe Ctrl (12341 ): 19132 I/Os completed (+1840) 00:14:55.481 00:14:56.417 QEMU NVMe Ctrl (12340 ): 21300 I/Os completed (+1944) 00:14:56.417 QEMU NVMe Ctrl (12341 ): 21078 I/Os completed (+1946) 00:14:56.417 00:14:57.796 QEMU NVMe Ctrl (12340 ): 23196 I/Os completed (+1896) 00:14:57.796 QEMU NVMe Ctrl (12341 ): 22974 I/Os completed (+1896) 00:14:57.796 00:14:58.056 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:58.056 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:58.056 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:58.056 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:58.056 [2024-11-25 20:33:06.060028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:58.056 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:58.056 [2024-11-25 20:33:06.061942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.062126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.062242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.062300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:58.056 [2024-11-25 20:33:06.065489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.065561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.065585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.065606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:10.0/device 00:14:58.056 EAL: Scan for (pci) bus failed. 00:14:58.056 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:58.056 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:58.056 [2024-11-25 20:33:06.090395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:58.056 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:58.056 [2024-11-25 20:33:06.092117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.092212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.092267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.092311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:58.056 [2024-11-25 20:33:06.095078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.095127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.056 [2024-11-25 20:33:06.095155] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.057 [2024-11-25 20:33:06.095173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:58.057 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:58.057 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:58.057 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:58.057 EAL: Scan for (pci) bus failed. 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:58.316 Attaching to 0000:00:10.0 00:14:58.316 Attached to 0000:00:10.0 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:58.316 20:33:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:58.316 Attaching to 0000:00:11.0 00:14:58.316 Attached to 0000:00:11.0 00:14:58.316 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:58.316 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:58.316 [2024-11-25 20:33:06.440439] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:15:10.527 20:33:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:10.527 20:33:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:10.527 20:33:18 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.13 00:15:10.527 20:33:18 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.13 00:15:10.527 20:33:18 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:10.527 20:33:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.13 00:15:10.527 20:33:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.13 2 00:15:10.527 remove_attach_helper took 43.13s to complete (handling 2 nvme drive(s)) 20:33:18 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:15:17.097 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68357 00:15:17.097 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68357) - No such process 00:15:17.097 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68357 00:15:17.098 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:15:17.098 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:15:17.098 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:15:17.098 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68894 00:15:17.098 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.098 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:15:17.098 20:33:24 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68894 00:15:17.098 20:33:24 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68894 ']' 00:15:17.098 20:33:24 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.098 20:33:24 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.098 20:33:24 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.098 20:33:24 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.098 20:33:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 [2024-11-25 20:33:24.556776] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
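[Editor's note] The "remove_attach_helper took 43.13s" line above is produced by bash's time builtin with a two-decimal TIMEFORMAT, as traced earlier in timing_cmd (local time=0 TIMEFORMAT=%2R, then exec). The real wrapper juggles file descriptors via exec; a simpler capture that yields the same printf, where remove_attach_helper and the nvmes array come from the test script itself:

    # Capture wall-clock seconds of a command with 2-decimal precision:
    TIMEFORMAT=%2R
    helper_time=$( { time remove_attach_helper 3 6 false > /dev/null 2>&1; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"

The inner redirection silences the helper; the outer 2>&1 captures only the time report, which is all that reaches helper_time.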
00:15:17.098 [2024-11-25 20:33:24.557116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68894 ] 00:15:17.098 [2024-11-25 20:33:24.739734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.098 [2024-11-25 20:33:24.883527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:18.118 20:33:25 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:18.118 20:33:25 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:24.686 20:33:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:24.686 20:33:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.686 20:33:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:24.686 [2024-11-25 20:33:32.006121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
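[Editor's note] This second phase (tgt_run_hotplug) differs from the first: instead of the standalone hotplug example, a full spdk_tgt with bdev_nvme_set_hotplug -e watches the devices, and the test checks the target's view over RPC. The @12/@13 traces above show how it enumerates which controllers the target still claims, essentially verbatim from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    # bdev_bdfs, as traced (sw_hotplug.sh@12-13): list the PCI addresses of all
    # NVMe bdevs the running spdk_tgt currently exposes.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }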
00:15:24.686 [2024-11-25 20:33:32.008884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.686 [2024-11-25 20:33:32.008939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.686 [2024-11-25 20:33:32.008962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.686 [2024-11-25 20:33:32.008995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.686 [2024-11-25 20:33:32.009008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.686 [2024-11-25 20:33:32.009024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.686 [2024-11-25 20:33:32.009038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.686 [2024-11-25 20:33:32.009053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.686 [2024-11-25 20:33:32.009065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.686 [2024-11-25 20:33:32.009087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.686 [2024-11-25 20:33:32.009099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.686 [2024-11-25 20:33:32.009115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.686 20:33:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:24.686 [2024-11-25 20:33:32.405481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:24.686 [2024-11-25 20:33:32.408221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.686 [2024-11-25 20:33:32.408267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.686 [2024-11-25 20:33:32.408289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.686 [2024-11-25 20:33:32.408314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.686 [2024-11-25 20:33:32.408344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.686 [2024-11-25 20:33:32.408373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.686 [2024-11-25 20:33:32.408390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.686 [2024-11-25 20:33:32.408402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.686 [2024-11-25 20:33:32.408418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.686 [2024-11-25 20:33:32.408432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:24.686 [2024-11-25 20:33:32.408445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.686 [2024-11-25 20:33:32.408457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:24.686 20:33:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.686 20:33:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:24.686 20:33:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:24.686 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:24.944 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:24.944 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:24.944 20:33:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:37.240 20:33:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.240 20:33:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:37.240 20:33:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:37.240 20:33:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:37.240 20:33:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.240 20:33:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:37.240 [2024-11-25 20:33:45.085153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
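[Editor's note] The @50/@51 lines above trace a poll: after hot-removing the devices, the test re-runs bdev_bdfs every half second until the target reports no NVMe bdevs left, printing the stragglers each pass. That is why the trace shows (( 2 > 0 )), sleep 0.5, then (( 0 > 0 )) once both controllers are gone. A condensed sketch of the loop:

    # Poll until the hot-removed controllers disappear from the target's bdev list:
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done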
00:15:37.240 [2024-11-25 20:33:45.088011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.240 [2024-11-25 20:33:45.088063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.240 [2024-11-25 20:33:45.088083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.240 [2024-11-25 20:33:45.088114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.240 [2024-11-25 20:33:45.088126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.240 [2024-11-25 20:33:45.088142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.240 [2024-11-25 20:33:45.088157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.240 [2024-11-25 20:33:45.088172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.240 [2024-11-25 20:33:45.088184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.240 [2024-11-25 20:33:45.088201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.240 [2024-11-25 20:33:45.088212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.240 [2024-11-25 20:33:45.088228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.240 20:33:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:37.240 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:37.522 [2024-11-25 20:33:45.484516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:37.522 [2024-11-25 20:33:45.487495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.523 [2024-11-25 20:33:45.487542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.523 [2024-11-25 20:33:45.487569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.523 [2024-11-25 20:33:45.487598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.523 [2024-11-25 20:33:45.487614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.523 [2024-11-25 20:33:45.487627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.523 [2024-11-25 20:33:45.487644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.523 [2024-11-25 20:33:45.487656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.523 [2024-11-25 20:33:45.487672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.523 [2024-11-25 20:33:45.487685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:37.523 [2024-11-25 20:33:45.487700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:37.523 [2024-11-25 20:33:45.487713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:37.523 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:37.523 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:37.523 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:37.523 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:37.523 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:37.523 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:37.523 20:33:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.523 20:33:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:37.523 20:33:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:37.782 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:38.041 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:38.041 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:38.041 20:33:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:50.252 20:33:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:50.252 20:33:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:50.252 20:33:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:50.252 20:33:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:50.252 20:33:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:50.252 20:33:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:50.252 20:33:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.252 20:33:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:50.252 20:33:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.252 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:50.252 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:50.252 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:50.252 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:50.252 [2024-11-25 20:33:58.064359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:50.252 [2024-11-25 20:33:58.068062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:50.252 [2024-11-25 20:33:58.068220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.252 [2024-11-25 20:33:58.068379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.253 [2024-11-25 20:33:58.068510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:50.253 [2024-11-25 20:33:58.068554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.253 [2024-11-25 20:33:58.068668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.253 [2024-11-25 20:33:58.068820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:50.253 [2024-11-25 20:33:58.068865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.253 [2024-11-25 20:33:58.069037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.253 [2024-11-25 20:33:58.069105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:50.253 [2024-11-25 20:33:58.069227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.253 [2024-11-25 20:33:58.069292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:50.253 20:33:58 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:50.253 20:33:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.253 20:33:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:50.253 20:33:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:50.253 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:50.511 [2024-11-25 20:33:58.463709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:50.511 [2024-11-25 20:33:58.467076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:50.511 [2024-11-25 20:33:58.467124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.511 [2024-11-25 20:33:58.467149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.511 [2024-11-25 20:33:58.467177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:50.511 [2024-11-25 20:33:58.467194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.511 [2024-11-25 20:33:58.467207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.511 [2024-11-25 20:33:58.467226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:50.511 [2024-11-25 20:33:58.467239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.511 [2024-11-25 20:33:58.467262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.511 [2024-11-25 20:33:58.467276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:50.511 [2024-11-25 20:33:58.467293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.511 [2024-11-25 20:33:58.467305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:15:50.771 20:33:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.771 20:33:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:50.771 20:33:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:50.771 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:51.030 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:51.030 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:51.030 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:51.030 20:33:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:51.030 20:33:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:51.030 20:33:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:51.030 20:33:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:16:03.240 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.240 20:34:11 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:03.240 20:34:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:03.240 20:34:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:09.832 20:34:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.832 20:34:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:09.832 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:09.832 [2024-11-25 20:34:17.204543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
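The 45.16s reported above comes from the timing wrapper traced at autotest_common.sh lines 709-722: the helper runs under bash's time keyword with TIMEFORMAT=%2R, so only wall-clock seconds with two decimals are emitted, and the caller captures that into helper_time. A sketch under those assumptions; the exact capture plumbing and the stdin handling at line 711 are not visible in the trace:

```bash
timing_cmd() {
	local cmd_es=0
	local time=0 TIMEFORMAT=%2R   # `time` will print only real seconds, 2 decimals
	# Redirecting the compound command's stderr into the substitution is one
	# plausible way to capture the %2R output; the trace only shows the result.
	time=$({ time "$@" > /dev/null; } 2>&1) || cmd_es=$?
	echo "$time"
	return "$cmd_es"
}

# Call shape from sw_hotplug.sh lines 19-22 of the trace:
helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
	"$helper_time" 2
```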
00:16:09.832 [2024-11-25 20:34:17.206659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.832 [2024-11-25 20:34:17.206819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.832 [2024-11-25 20:34:17.207023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.832 [2024-11-25 20:34:17.207216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.832 [2024-11-25 20:34:17.207257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.832 [2024-11-25 20:34:17.207312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.832 [2024-11-25 20:34:17.207433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.832 [2024-11-25 20:34:17.207478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.833 [2024-11-25 20:34:17.207531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.833 [2024-11-25 20:34:17.207585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.833 [2024-11-25 20:34:17.207702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.833 [2024-11-25 20:34:17.207761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.833 20:34:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:09.833 [2024-11-25 20:34:17.603927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:09.833 [2024-11-25 20:34:17.606582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.833 [2024-11-25 20:34:17.606737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.833 [2024-11-25 20:34:17.606888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.833 [2024-11-25 20:34:17.606958] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.833 [2024-11-25 20:34:17.607053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.833 [2024-11-25 20:34:17.607110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.833 [2024-11-25 20:34:17.607209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.833 [2024-11-25 20:34:17.607293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.833 [2024-11-25 20:34:17.607431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.833 [2024-11-25 20:34:17.607489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:09.833 [2024-11-25 20:34:17.607643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:09.833 [2024-11-25 20:34:17.607739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:09.833 20:34:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.833 20:34:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:09.833 20:34:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:09.833 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:10.092 20:34:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:10.092 20:34:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:10.092 20:34:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:10.092 20:34:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:10.092 20:34:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:10.092 20:34:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:10.092 20:34:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:10.092 20:34:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:22.354 20:34:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.354 20:34:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:22.354 20:34:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:22.354 20:34:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.354 20:34:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:22.354 [2024-11-25 20:34:30.283553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
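The bare echo lines traced above (sw_hotplug.sh lines 56-62) are sysfs writes whose redirection targets xtrace hides: one write re-scans the PCI bus after the surprise removals, then each device gets a driver_override of uio_pci_generic, a probe write of its BDF, and an empty write to clear the override. A sketch with the standard PCI sysfs paths assumed (the trace also shows a second echo of the BDF whose target is likewise not visible):

```bash
echo 1 > /sys/bus/pci/rescan   # line 56: re-enumerate the removed functions

for dev in "${nvmes[@]}"; do
	# Lines 59-62 echo exactly these values; every path here is an assumption.
	echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
	echo "$dev" > /sys/bus/pci/drivers_probe
	echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done

sleep 12   # line 66: give the target's hotplug poller time to re-attach both drives
```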
00:16:22.354 [2024-11-25 20:34:30.285683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.354 [2024-11-25 20:34:30.285742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.354 [2024-11-25 20:34:30.285762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.354 [2024-11-25 20:34:30.285793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.354 [2024-11-25 20:34:30.285806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.354 [2024-11-25 20:34:30.285822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.354 [2024-11-25 20:34:30.285838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.354 [2024-11-25 20:34:30.285853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.354 [2024-11-25 20:34:30.285866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.354 [2024-11-25 20:34:30.285883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.354 [2024-11-25 20:34:30.285895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.354 [2024-11-25 20:34:30.285910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.354 20:34:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:22.354 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:22.612 [2024-11-25 20:34:30.682943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:22.613 [2024-11-25 20:34:30.685112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.613 [2024-11-25 20:34:30.685303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.613 [2024-11-25 20:34:30.685351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.613 [2024-11-25 20:34:30.685381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.613 [2024-11-25 20:34:30.685402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.613 [2024-11-25 20:34:30.685415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.613 [2024-11-25 20:34:30.685434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.613 [2024-11-25 20:34:30.685446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.613 [2024-11-25 20:34:30.685461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.613 [2024-11-25 20:34:30.685475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.613 [2024-11-25 20:34:30.685490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.613 [2024-11-25 20:34:30.685502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.871 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:22.871 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:22.871 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:22.871 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:22.872 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:22.872 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:22.872 20:34:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.872 20:34:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:22.872 20:34:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.872 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:22.872 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:22.872 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:22.872 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:22.872 20:34:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:23.130 20:34:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:23.130 20:34:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:23.130 20:34:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:23.130 20:34:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:23.130 20:34:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:23.130 20:34:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:23.130 20:34:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:23.130 20:34:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:35.401 20:34:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.401 20:34:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:35.401 20:34:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:35.401 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:35.401 20:34:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.401 20:34:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:35.402 [2024-11-25 20:34:43.362513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:16:35.402 [2024-11-25 20:34:43.364662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.402 [2024-11-25 20:34:43.364725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.402 [2024-11-25 20:34:43.364744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.402 [2024-11-25 20:34:43.364774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.402 [2024-11-25 20:34:43.364786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.402 [2024-11-25 20:34:43.364805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.402 [2024-11-25 20:34:43.364834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.402 [2024-11-25 20:34:43.364854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.402 [2024-11-25 20:34:43.364867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.402 [2024-11-25 20:34:43.364884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.402 [2024-11-25 20:34:43.364896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.402 [2024-11-25 20:34:43.364911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.402 20:34:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.402 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:35.402 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:35.969 [2024-11-25 20:34:43.861727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
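Every hot-remove in this run produces the same block: one nvme_ctrlr_fail "[<bdf>, 0] in failed state" line per controller, followed by four outstanding ASYNC EVENT REQUESTs (cid 187-190) completed as ABORTED - BY REQUEST (00/07) by nvme_pcie_qpair_abort_trackers. A quick sanity check when reading a saved copy of this console output (the filename is hypothetical):

```bash
LOG=nvme-vg-autotest-console.log   # hypothetical saved copy of this log

grep -c 'in failed state' "$LOG"               # one line per controller per hot-remove
grep -c 'aborting outstanding command' "$LOG"  # expect 4x the count above (the AERs)
```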
00:16:35.969 [2024-11-25 20:34:43.863797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.969 [2024-11-25 20:34:43.863841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.969 [2024-11-25 20:34:43.863864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.969 [2024-11-25 20:34:43.863892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.969 [2024-11-25 20:34:43.863907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.969 [2024-11-25 20:34:43.863920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.969 [2024-11-25 20:34:43.863937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.969 [2024-11-25 20:34:43.863948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.969 [2024-11-25 20:34:43.863965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.969 [2024-11-25 20:34:43.863979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.969 [2024-11-25 20:34:43.863998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.969 [2024-11-25 20:34:43.864010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.969 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:35.969 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:35.969 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:35.969 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:35.969 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:35.969 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:35.969 20:34:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.969 20:34:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:35.969 20:34:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.969 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:35.969 20:34:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:35.969 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:35.969 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:35.969 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:36.227 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:36.227 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:36.227 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:36.227 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:36.227 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:36.227 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:36.227 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:36.227 20:34:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.19 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.19 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:16:48.442 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:48.442 20:34:56 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68894 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68894 ']' 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68894 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68894 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68894' 00:16:48.442 killing process with pid 68894 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68894 00:16:48.442 20:34:56 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68894 00:16:50.990 20:34:59 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:51.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.131 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.131 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.131 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:52.131 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:52.390 00:16:52.390 real 2m34.143s 00:16:52.390 user 1m52.195s 00:16:52.390 sys 0m22.111s 00:16:52.390 
************************************ 00:16:52.390 END TEST sw_hotplug 00:16:52.390 ************************************ 00:16:52.390 20:35:00 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.390 20:35:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:52.390 20:35:00 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:52.390 20:35:00 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:52.390 20:35:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:52.390 20:35:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.390 20:35:00 -- common/autotest_common.sh@10 -- # set +x 00:16:52.390 ************************************ 00:16:52.390 START TEST nvme_xnvme 00:16:52.390 ************************************ 00:16:52.390 20:35:00 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:52.652 * Looking for test storage... 00:16:52.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.652 20:35:00 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.652 --rc genhtml_branch_coverage=1 00:16:52.652 --rc genhtml_function_coverage=1 00:16:52.652 --rc genhtml_legend=1 00:16:52.652 --rc geninfo_all_blocks=1 00:16:52.652 --rc geninfo_unexecuted_blocks=1 00:16:52.652 00:16:52.652 ' 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.652 --rc genhtml_branch_coverage=1 00:16:52.652 --rc genhtml_function_coverage=1 00:16:52.652 --rc genhtml_legend=1 00:16:52.652 --rc geninfo_all_blocks=1 00:16:52.652 --rc geninfo_unexecuted_blocks=1 00:16:52.652 00:16:52.652 ' 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.652 --rc genhtml_branch_coverage=1 00:16:52.652 --rc genhtml_function_coverage=1 00:16:52.652 --rc genhtml_legend=1 00:16:52.652 --rc geninfo_all_blocks=1 00:16:52.652 --rc geninfo_unexecuted_blocks=1 00:16:52.652 00:16:52.652 ' 00:16:52.652 20:35:00 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.652 --rc genhtml_branch_coverage=1 00:16:52.652 --rc genhtml_function_coverage=1 00:16:52.653 --rc genhtml_legend=1 00:16:52.653 --rc geninfo_all_blocks=1 00:16:52.653 --rc geninfo_unexecuted_blocks=1 00:16:52.653 00:16:52.653 ' 00:16:52.653 20:35:00 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:52.653 20:35:00 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:52.653 20:35:00 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:52.653 20:35:00 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:52.653 20:35:00 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:52.653 20:35:00 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:52.653 20:35:00 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:52.653 20:35:00 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:52.653 20:35:00 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:52.653 20:35:00 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
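The lcov version gate traced just before this config dump (scripts/common.sh lines 333-368: lt calling cmp_versions and decimal) splits versions on ".", "-" and ":" and compares them numerically field by field, which is how lt 1.15 2 succeeds and selects the pre-2.0 "--rc lcov_branch_coverage=1" spelling. A hedged reconstruction; field padding and the full operator table are simplified:

```bash
decimal() {
	local d=$1
	[[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # treat non-numeric fields as 0
}

cmp_versions() {
	local IFS=.-:                 # split fields on '.', '-' and ':'
	local -a ver1 ver2
	read -ra ver1 <<< "$1"
	read -ra ver2 <<< "$3"
	local op=$2 v a b
	for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
		a=$(decimal "${ver1[v]:-0}")
		b=$(decimal "${ver2[v]:-0}")
		((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
		((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
	done
	[[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo '1.15 is older than 2'
```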
00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:52.653 20:35:00 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:52.653 20:35:00 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:52.653 20:35:00 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:52.653 20:35:00 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:52.653 #define SPDK_CONFIG_H 00:16:52.653 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:52.653 #define SPDK_CONFIG_APPS 1 00:16:52.653 #define SPDK_CONFIG_ARCH native 00:16:52.653 #define SPDK_CONFIG_ASAN 1 00:16:52.653 #undef SPDK_CONFIG_AVAHI 00:16:52.653 #undef SPDK_CONFIG_CET 00:16:52.654 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:52.654 #define SPDK_CONFIG_COVERAGE 1 00:16:52.654 #define SPDK_CONFIG_CROSS_PREFIX 00:16:52.654 #undef SPDK_CONFIG_CRYPTO 00:16:52.654 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:52.654 #undef SPDK_CONFIG_CUSTOMOCF 00:16:52.654 #undef SPDK_CONFIG_DAOS 00:16:52.654 #define SPDK_CONFIG_DAOS_DIR 00:16:52.654 #define SPDK_CONFIG_DEBUG 1 00:16:52.654 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:52.654 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:52.654 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:52.654 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:52.654 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:52.654 #undef SPDK_CONFIG_DPDK_UADK 00:16:52.654 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:52.654 #define SPDK_CONFIG_EXAMPLES 1 00:16:52.654 #undef SPDK_CONFIG_FC 00:16:52.654 #define SPDK_CONFIG_FC_PATH 00:16:52.654 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:52.654 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:52.654 #define SPDK_CONFIG_FSDEV 1 00:16:52.654 #undef SPDK_CONFIG_FUSE 00:16:52.654 #undef SPDK_CONFIG_FUZZER 00:16:52.654 #define SPDK_CONFIG_FUZZER_LIB 00:16:52.654 #undef SPDK_CONFIG_GOLANG 00:16:52.654 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:52.654 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:52.654 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:52.654 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:52.654 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:52.654 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:52.654 #undef SPDK_CONFIG_HAVE_LZ4 00:16:52.654 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:52.654 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:52.654 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:52.654 #define SPDK_CONFIG_IDXD 1 00:16:52.654 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:52.654 #undef SPDK_CONFIG_IPSEC_MB 00:16:52.654 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:52.654 #define SPDK_CONFIG_ISAL 1 00:16:52.654 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:52.654 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:52.654 #define SPDK_CONFIG_LIBDIR 00:16:52.654 #undef SPDK_CONFIG_LTO 00:16:52.654 #define SPDK_CONFIG_MAX_LCORES 128 00:16:52.654 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:52.654 #define SPDK_CONFIG_NVME_CUSE 1 00:16:52.654 #undef SPDK_CONFIG_OCF 00:16:52.654 #define SPDK_CONFIG_OCF_PATH 00:16:52.654 #define SPDK_CONFIG_OPENSSL_PATH 00:16:52.654 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:52.654 #define SPDK_CONFIG_PGO_DIR 00:16:52.654 #undef SPDK_CONFIG_PGO_USE 00:16:52.654 #define SPDK_CONFIG_PREFIX /usr/local 00:16:52.654 #undef SPDK_CONFIG_RAID5F 00:16:52.654 #undef SPDK_CONFIG_RBD 00:16:52.654 #define SPDK_CONFIG_RDMA 1 00:16:52.654 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:52.654 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:52.654 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:52.654 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:52.654 #define SPDK_CONFIG_SHARED 1 00:16:52.654 #undef SPDK_CONFIG_SMA 00:16:52.654 #define SPDK_CONFIG_TESTS 1 00:16:52.654 #undef SPDK_CONFIG_TSAN 00:16:52.654 #define SPDK_CONFIG_UBLK 1 00:16:52.654 #define SPDK_CONFIG_UBSAN 1 00:16:52.654 #undef SPDK_CONFIG_UNIT_TESTS 00:16:52.654 #undef SPDK_CONFIG_URING 00:16:52.654 #define SPDK_CONFIG_URING_PATH 00:16:52.654 #undef SPDK_CONFIG_URING_ZNS 00:16:52.654 #undef SPDK_CONFIG_USDT 00:16:52.654 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:52.654 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:52.654 #undef SPDK_CONFIG_VFIO_USER 00:16:52.654 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:52.654 #define SPDK_CONFIG_VHOST 1 00:16:52.654 #define SPDK_CONFIG_VIRTIO 1 00:16:52.654 #undef SPDK_CONFIG_VTUNE 00:16:52.654 #define SPDK_CONFIG_VTUNE_DIR 00:16:52.654 #define SPDK_CONFIG_WERROR 1 00:16:52.654 #define SPDK_CONFIG_WPDK_DIR 00:16:52.654 #define SPDK_CONFIG_XNVME 1 00:16:52.654 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:52.654 20:35:00 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:52.654 20:35:00 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:52.654 20:35:00 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.654 20:35:00 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.654 20:35:00 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.654 20:35:00 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.654 20:35:00 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.654 20:35:00 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.654 20:35:00 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.654 20:35:00 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:52.654 20:35:00 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:52.654 
20:35:00 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:52.654 20:35:00 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:52.654 20:35:00 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:52.655 20:35:00 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:52.655 20:35:00 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:52.655 20:35:00 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
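The sanitizer environment assembled in the trace above reduces to a handful of exports. A minimal sketch, assuming the same paths as the log (the option strings and the libfuse3.so leak suppression are copied verbatim from the trace; the surrounding script structure is illustrative, not the literal autotest_common.sh source):

#!/usr/bin/env bash
# ASan/UBSan behave as traced above: abort on error, keep coredumps usable
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# Rebuild the LeakSanitizer suppression file from scratch on every run;
# the only entry suppresses a known benign leak inside the FUSE runtime.
supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo 'leak:libfuse3.so' > "$supp"
export LSAN_OPTIONS=suppressions=$supp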
00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70283 ]] 00:16:52.656 20:35:00 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70283 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ipKbLS 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.ipKbLS/tests/xnvme /tmp/spdk.ipKbLS 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:52.916 20:35:00 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13719650304 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5849317376 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:52.916 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13719650304 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5849317376 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.917 20:35:00 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95566118912 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4136660992 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:52.917 * Looking for test storage... 
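With the df -T output folded into the mounts/fss/avails associative arrays above, the storage scan that follows is easier to read. A condensed sketch of the selection loop, assuming the variable names from the trace (the real set_test_storage in autotest_common.sh carries extra bookkeeping not shown here):

# requested_size = 2 GiB of test data plus overhead, as computed in the trace
requested_size=2214592512
for target_dir in "${storage_candidates[@]}"; do
    # resolve the mount point backing this candidate directory
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails["$mount"]}
    # too small? move on to the next candidate
    (( target_space == 0 || target_space < requested_size )) && continue
    # RAM-backed filesystems cannot hold the test data; skip them
    [[ ${fss["$mount"]} == tmpfs || ${fss["$mount"]} == ramfs ]] && continue
    export SPDK_TEST_STORAGE=$target_dir
    printf '* Found test storage at %s\n' "$target_dir"
    break
done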
00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13719650304 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:52.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.917 20:35:00 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.917 --rc genhtml_branch_coverage=1 00:16:52.917 --rc genhtml_function_coverage=1 00:16:52.917 --rc genhtml_legend=1 00:16:52.917 --rc geninfo_all_blocks=1 00:16:52.917 --rc geninfo_unexecuted_blocks=1 00:16:52.917 00:16:52.917 ' 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.917 --rc genhtml_branch_coverage=1 00:16:52.917 --rc genhtml_function_coverage=1 00:16:52.917 --rc genhtml_legend=1 00:16:52.917 --rc geninfo_all_blocks=1 
00:16:52.917 --rc geninfo_unexecuted_blocks=1 00:16:52.917 00:16:52.917 ' 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.917 --rc genhtml_branch_coverage=1 00:16:52.917 --rc genhtml_function_coverage=1 00:16:52.917 --rc genhtml_legend=1 00:16:52.917 --rc geninfo_all_blocks=1 00:16:52.917 --rc geninfo_unexecuted_blocks=1 00:16:52.917 00:16:52.917 ' 00:16:52.917 20:35:00 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.917 --rc genhtml_branch_coverage=1 00:16:52.917 --rc genhtml_function_coverage=1 00:16:52.917 --rc genhtml_legend=1 00:16:52.917 --rc geninfo_all_blocks=1 00:16:52.917 --rc geninfo_unexecuted_blocks=1 00:16:52.917 00:16:52.917 ' 00:16:52.918 20:35:00 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:52.918 20:35:00 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.918 20:35:00 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.918 20:35:00 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.918 20:35:00 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.918 20:35:00 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.918 20:35:00 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.918 20:35:00 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.918 20:35:00 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:52.918 20:35:00 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.918 20:35:00 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:52.918 20:35:00 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:53.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:53.746 Waiting for block devices as requested 00:16:53.746 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:54.005 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:54.005 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:54.265 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:59.538 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:59.538 20:35:07 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:16:59.796 20:35:07 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:16:59.796 20:35:07 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:17:00.054 20:35:07 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:17:00.054 20:35:07 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:17:00.054 20:35:07 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:17:00.054 20:35:07 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:17:00.054 20:35:07 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:17:00.054 No valid GPT data, bailing 00:17:00.054 20:35:08 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:00.054 20:35:08 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:17:00.054 20:35:08 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:00.054 20:35:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:00.054 20:35:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:00.054 20:35:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.054 20:35:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.054 ************************************ 00:17:00.054 START TEST xnvme_rpc 00:17:00.054 ************************************ 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70685 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70685 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70685 ']' 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.054 20:35:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.054 [2024-11-25 20:35:08.174344] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
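Once spdk_tgt is up and listening, the body of the test below is a plain JSON-RPC round-trip: create the xnvme bdev, read the framework config back, and check each parameter. A sketch of that exchange using the commands and jq filters visible in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py and /var/tmp/spdk.sock; the cfg helper is an illustrative condensation of the trace's rpc_xnvme; the empty final argument selects conserve_cpu=false, '-c' would select true):

# create: filename, bdev name, io_mechanism, optional conserve-cpu flag
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''

# verify: dump the bdev subsystem config and pick the creation params out
cfg() { rpc_cmd framework_get_config bdev \
        | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"; }
[[ $(cfg name)         == xnvme_bdev   ]]
[[ $(cfg filename)     == /dev/nvme0n1 ]]
[[ $(cfg io_mechanism) == libaio       ]]
[[ $(cfg conserve_cpu) == false        ]]

# tear down
rpc_cmd bdev_xnvme_delete xnvme_bdev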
00:17:00.054 [2024-11-25 20:35:08.174465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70685 ] 00:17:00.312 [2024-11-25 20:35:08.359094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.570 [2024-11-25 20:35:08.504646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.507 xnvme_bdev 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:01.507 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:01.766 20:35:09 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70685 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70685 ']' 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70685 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70685 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.766 killing process with pid 70685 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70685' 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70685 00:17:01.766 20:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70685 00:17:05.052 00:17:05.052 real 0m4.476s 00:17:05.052 user 0m4.324s 00:17:05.052 sys 0m0.739s 00:17:05.052 20:35:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.052 20:35:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.052 ************************************ 00:17:05.052 END TEST xnvme_rpc 00:17:05.052 ************************************ 00:17:05.052 20:35:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:05.052 20:35:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:05.052 20:35:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.052 20:35:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:05.052 ************************************ 00:17:05.052 START TEST xnvme_bdevperf 00:17:05.052 ************************************ 00:17:05.052 20:35:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:05.052 20:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:05.052 20:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:05.052 20:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:05.052 20:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:05.052 20:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:05.052 20:35:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:05.052 20:35:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:05.052 { 00:17:05.052 "subsystems": [ 00:17:05.052 { 00:17:05.052 "subsystem": "bdev", 00:17:05.052 "config": [ 00:17:05.052 { 00:17:05.052 "params": { 00:17:05.052 "io_mechanism": "libaio", 00:17:05.052 "conserve_cpu": false, 00:17:05.052 "filename": "/dev/nvme0n1", 00:17:05.052 "name": "xnvme_bdev" 00:17:05.052 }, 00:17:05.052 "method": "bdev_xnvme_create" 00:17:05.052 }, 00:17:05.052 { 00:17:05.052 "method": "bdev_wait_for_examine" 00:17:05.052 } 00:17:05.052 ] 00:17:05.052 } 00:17:05.052 ] 00:17:05.052 } 00:17:05.052 [2024-11-25 20:35:12.718085] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:17:05.052 [2024-11-25 20:35:12.718229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70770 ] 00:17:05.052 [2024-11-25 20:35:12.903173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.052 [2024-11-25 20:35:13.048835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.620 Running I/O for 5 seconds... 00:17:07.492 40340.00 IOPS, 157.58 MiB/s [2024-11-25T20:35:16.633Z] 43218.00 IOPS, 168.82 MiB/s [2024-11-25T20:35:17.569Z] 43388.67 IOPS, 169.49 MiB/s [2024-11-25T20:35:18.506Z] 43453.50 IOPS, 169.74 MiB/s [2024-11-25T20:35:18.506Z] 43312.60 IOPS, 169.19 MiB/s 00:17:10.370 Latency(us) 00:17:10.370 [2024-11-25T20:35:18.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.370 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:10.370 xnvme_bdev : 5.01 43281.23 169.07 0.00 0.00 1475.44 184.24 7580.07 00:17:10.370 [2024-11-25T20:35:18.506Z] =================================================================================================================== 00:17:10.370 [2024-11-25T20:35:18.506Z] Total : 43281.23 169.07 0.00 0.00 1475.44 184.24 7580.07 00:17:11.744 20:35:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:11.744 20:35:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:11.744 20:35:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:11.744 20:35:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:11.744 20:35:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:11.744 { 00:17:11.744 "subsystems": [ 00:17:11.744 { 00:17:11.744 "subsystem": "bdev", 00:17:11.744 "config": [ 00:17:11.744 { 00:17:11.744 "params": { 00:17:11.744 "io_mechanism": "libaio", 00:17:11.744 "conserve_cpu": false, 00:17:11.744 "filename": "/dev/nvme0n1", 00:17:11.744 "name": "xnvme_bdev" 00:17:11.744 }, 00:17:11.744 "method": "bdev_xnvme_create" 00:17:11.744 }, 00:17:11.744 { 00:17:11.744 "method": "bdev_wait_for_examine" 00:17:11.744 } 00:17:11.744 ] 00:17:11.744 } 00:17:11.744 ] 00:17:11.744 } 00:17:11.744 [2024-11-25 20:35:19.827193] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
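The bdevperf runs above receive their bdev layout as JSON on fd 62 via the suite's gen_conf. The same invocation can be reproduced by hand; a sketch assuming the repo layout from the log (feeding the config on stdin instead of fd 62 is an assumption — bdevperf only needs --json to point at a readable file):

# -q 64: queue depth, -w randwrite: workload, -t 5: seconds,
# -T xnvme_bdev: exercise only this bdev, -o 4096: 4 KiB I/Os
./build/examples/bdevperf --json /dev/stdin \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON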
00:17:11.744 [2024-11-25 20:35:19.827340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70856 ] 00:17:12.003 [2024-11-25 20:35:20.020622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.262 [2024-11-25 20:35:20.167225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.520 Running I/O for 5 seconds... 00:17:14.830 40332.00 IOPS, 157.55 MiB/s [2024-11-25T20:35:23.902Z] 40798.50 IOPS, 159.37 MiB/s [2024-11-25T20:35:24.837Z] 39712.33 IOPS, 155.13 MiB/s [2024-11-25T20:35:25.793Z] 39241.25 IOPS, 153.29 MiB/s 00:17:17.657 Latency(us) 00:17:17.657 [2024-11-25T20:35:25.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.657 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:17.657 xnvme_bdev : 5.00 39232.14 153.25 0.00 0.00 1627.73 203.98 6079.85 00:17:17.657 [2024-11-25T20:35:25.793Z] =================================================================================================================== 00:17:17.657 [2024-11-25T20:35:25.793Z] Total : 39232.14 153.25 0.00 0.00 1627.73 203.98 6079.85 00:17:19.035 00:17:19.035 real 0m14.231s 00:17:19.035 user 0m5.338s 00:17:19.035 sys 0m5.962s 00:17:19.035 20:35:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.035 20:35:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:19.035 ************************************ 00:17:19.036 END TEST xnvme_bdevperf 00:17:19.036 ************************************ 00:17:19.036 20:35:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:19.036 20:35:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:19.036 20:35:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.036 20:35:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:19.036 ************************************ 00:17:19.036 START TEST xnvme_fio_plugin 00:17:19.036 ************************************ 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:19.036 20:35:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:19.036 { 00:17:19.036 "subsystems": [ 00:17:19.036 { 00:17:19.036 "subsystem": "bdev", 00:17:19.036 "config": [ 00:17:19.036 { 00:17:19.036 "params": { 00:17:19.036 "io_mechanism": "libaio", 00:17:19.036 "conserve_cpu": false, 00:17:19.036 "filename": "/dev/nvme0n1", 00:17:19.036 "name": "xnvme_bdev" 00:17:19.036 }, 00:17:19.036 "method": "bdev_xnvme_create" 00:17:19.036 }, 00:17:19.036 { 00:17:19.036 "method": "bdev_wait_for_examine" 00:17:19.036 } 00:17:19.036 ] 00:17:19.036 } 00:17:19.036 ] 00:17:19.036 } 00:17:19.036 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:19.036 fio-3.35 00:17:19.036 Starting 1 thread 00:17:25.600 00:17:25.600 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70981: Mon Nov 25 20:35:33 2024 00:17:25.600 read: IOPS=48.0k, BW=187MiB/s (196MB/s)(937MiB/5001msec) 00:17:25.600 slat (usec): min=4, max=1019, avg=18.33, stdev=27.64 00:17:25.600 clat (usec): min=53, max=5641, avg=787.27, stdev=455.77 00:17:25.600 lat (usec): min=67, max=5646, avg=805.60, stdev=457.74 00:17:25.600 clat percentiles (usec): 00:17:25.600 | 1.00th=[ 163], 5.00th=[ 241], 10.00th=[ 314], 20.00th=[ 433], 00:17:25.600 | 30.00th=[ 529], 40.00th=[ 619], 50.00th=[ 717], 60.00th=[ 816], 00:17:25.601 | 70.00th=[ 938], 80.00th=[ 1074], 90.00th=[ 1287], 95.00th=[ 1483], 00:17:25.601 | 99.00th=[ 2507], 99.50th=[ 3097], 99.90th=[ 4228], 99.95th=[ 4424], 00:17:25.601 | 99.99th=[ 4948] 00:17:25.601 bw ( KiB/s): min=166896, max=257432, per=96.41%, avg=184963.56, stdev=28500.42, samples=9 
00:17:25.601 iops : min=41724, max=64358, avg=46240.89, stdev=7125.11, samples=9 00:17:25.601 lat (usec) : 100=0.08%, 250=5.47%, 500=21.38%, 750=26.48%, 1000=21.29% 00:17:25.601 lat (msec) : 2=23.45%, 4=1.70%, 10=0.15% 00:17:25.601 cpu : usr=24.48%, sys=55.86%, ctx=58, majf=0, minf=764 00:17:25.601 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=10.6%, 16=25.3%, 32=57.0%, >=64=1.9% 00:17:25.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.601 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:25.601 issued rwts: total=239861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:25.601 00:17:25.601 Run status group 0 (all jobs): 00:17:25.601 READ: bw=187MiB/s (196MB/s), 187MiB/s-187MiB/s (196MB/s-196MB/s), io=937MiB (982MB), run=5001-5001msec 00:17:26.538 ----------------------------------------------------- 00:17:26.538 Suppressions used: 00:17:26.538 count bytes template 00:17:26.538 1 11 /usr/src/fio/parse.c 00:17:26.538 1 8 libtcmalloc_minimal.so 00:17:26.538 1 904 libcrypto.so 00:17:26.538 ----------------------------------------------------- 00:17:26.538 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:26.538 20:35:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:26.538 { 00:17:26.538 "subsystems": [ 00:17:26.538 { 00:17:26.538 "subsystem": "bdev", 00:17:26.538 "config": [ 00:17:26.538 { 00:17:26.538 "params": { 00:17:26.538 "io_mechanism": "libaio", 00:17:26.538 "conserve_cpu": false, 00:17:26.538 "filename": "/dev/nvme0n1", 00:17:26.538 "name": "xnvme_bdev" 00:17:26.538 }, 00:17:26.538 "method": "bdev_xnvme_create" 00:17:26.538 }, 00:17:26.538 { 00:17:26.538 "method": "bdev_wait_for_examine" 00:17:26.538 } 00:17:26.538 ] 00:17:26.538 } 00:17:26.538 ] 00:17:26.538 } 00:17:26.798 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:26.798 fio-3.35 00:17:26.798 Starting 1 thread 00:17:33.520 00:17:33.520 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71078: Mon Nov 25 20:35:40 2024 00:17:33.520 write: IOPS=52.2k, BW=204MiB/s (214MB/s)(1020MiB/5001msec); 0 zone resets 00:17:33.520 slat (usec): min=4, max=2996, avg=16.75, stdev=32.35 00:17:33.520 clat (usec): min=85, max=5860, avg=739.05, stdev=389.20 00:17:33.520 lat (usec): min=143, max=5975, avg=755.80, stdev=389.61 00:17:33.520 clat percentiles (usec): 00:17:33.520 | 1.00th=[ 174], 5.00th=[ 265], 10.00th=[ 334], 20.00th=[ 441], 00:17:33.520 | 30.00th=[ 529], 40.00th=[ 611], 50.00th=[ 693], 60.00th=[ 775], 00:17:33.520 | 70.00th=[ 865], 80.00th=[ 979], 90.00th=[ 1156], 95.00th=[ 1319], 00:17:33.520 | 99.00th=[ 2089], 99.50th=[ 2769], 99.90th=[ 3982], 99.95th=[ 4359], 00:17:33.520 | 99.99th=[ 5211] 00:17:33.520 bw ( KiB/s): min=161408, max=242352, per=100.00%, avg=211559.11, stdev=28348.26, samples=9 00:17:33.520 iops : min=40352, max=60588, avg=52889.78, stdev=7087.06, samples=9 00:17:33.520 lat (usec) : 100=0.10%, 250=3.98%, 500=22.40%, 750=30.77%, 1000=24.13% 00:17:33.520 lat (msec) : 2=17.54%, 4=0.98%, 10=0.10% 00:17:33.520 cpu : usr=27.28%, sys=57.36%, ctx=28, majf=0, minf=764 00:17:33.520 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=9.4%, 16=24.8%, 32=59.6%, >=64=2.0% 00:17:33.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.520 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:17:33.520 issued rwts: total=0,261088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.520 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:33.520 00:17:33.520 Run status group 0 (all jobs): 00:17:33.520 WRITE: bw=204MiB/s (214MB/s), 204MiB/s-204MiB/s (214MB/s-214MB/s), io=1020MiB (1069MB), run=5001-5001msec 00:17:34.088 ----------------------------------------------------- 00:17:34.088 Suppressions used: 00:17:34.088 count bytes template 00:17:34.088 1 11 /usr/src/fio/parse.c 00:17:34.088 1 8 libtcmalloc_minimal.so 00:17:34.088 1 904 libcrypto.so 00:17:34.088 ----------------------------------------------------- 00:17:34.088 00:17:34.088 00:17:34.088 real 0m15.248s 00:17:34.088 user 0m6.552s 00:17:34.088 sys 0m6.616s 00:17:34.088 20:35:42 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.088 20:35:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:34.088 ************************************ 00:17:34.088 END TEST xnvme_fio_plugin 00:17:34.088 ************************************ 00:17:34.088 20:35:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:34.088 20:35:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:34.088 20:35:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:34.088 20:35:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:34.088 20:35:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:34.088 20:35:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.088 20:35:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.348 ************************************ 00:17:34.348 START TEST xnvme_rpc 00:17:34.348 ************************************ 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71169 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71169 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71169 ']' 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.348 20:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.348 [2024-11-25 20:35:42.350769] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:17:34.348 [2024-11-25 20:35:42.351134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:17:34.607 [2024-11-25 20:35:42.535979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.607 [2024-11-25 20:35:42.677310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.983 xnvme_bdev 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:35.983 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71169 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71169 ']' 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71169 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71169 00:17:35.984 killing process with pid 71169 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71169' 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71169 00:17:35.984 20:35:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71169 00:17:38.516 ************************************ 00:17:38.516 END TEST xnvme_rpc 00:17:38.516 ************************************ 00:17:38.516 00:17:38.516 real 0m4.410s 00:17:38.516 user 0m4.269s 00:17:38.516 sys 0m0.732s 00:17:38.516 20:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.516 20:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.776 20:35:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:38.776 20:35:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:38.776 20:35:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.776 20:35:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:38.776 ************************************ 00:17:38.776 START TEST xnvme_bdevperf 00:17:38.776 ************************************ 00:17:38.776 20:35:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:38.776 20:35:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:38.776 20:35:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:38.776 20:35:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:38.776 20:35:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:38.776 20:35:46 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:38.776 20:35:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:38.776 20:35:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:38.776 { 00:17:38.776 "subsystems": [ 00:17:38.776 { 00:17:38.776 "subsystem": "bdev", 00:17:38.776 "config": [ 00:17:38.776 { 00:17:38.776 "params": { 00:17:38.776 "io_mechanism": "libaio", 00:17:38.776 "conserve_cpu": true, 00:17:38.776 "filename": "/dev/nvme0n1", 00:17:38.776 "name": "xnvme_bdev" 00:17:38.776 }, 00:17:38.776 "method": "bdev_xnvme_create" 00:17:38.776 }, 00:17:38.776 { 00:17:38.777 "method": "bdev_wait_for_examine" 00:17:38.777 } 00:17:38.777 ] 00:17:38.777 } 00:17:38.777 ] 00:17:38.777 } 00:17:38.777 [2024-11-25 20:35:46.810419] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:17:38.777 [2024-11-25 20:35:46.810544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71255 ] 00:17:39.036 [2024-11-25 20:35:46.992415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.036 [2024-11-25 20:35:47.136740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.603 Running I/O for 5 seconds... 00:17:41.476 39138.00 IOPS, 152.88 MiB/s [2024-11-25T20:35:50.985Z] 38518.00 IOPS, 150.46 MiB/s [2024-11-25T20:35:51.919Z] 39300.33 IOPS, 153.52 MiB/s [2024-11-25T20:35:52.855Z] 38833.25 IOPS, 151.69 MiB/s 00:17:44.719 Latency(us) 00:17:44.719 [2024-11-25T20:35:52.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.719 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:44.719 xnvme_bdev : 5.00 39173.79 153.02 0.00 0.00 1630.27 186.71 5921.93 00:17:44.719 [2024-11-25T20:35:52.855Z] =================================================================================================================== 00:17:44.719 [2024-11-25T20:35:52.855Z] Total : 39173.79 153.02 0.00 0.00 1630.27 186.71 5921.93 00:17:46.096 20:35:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:46.096 20:35:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:46.096 20:35:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:46.096 20:35:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:46.096 20:35:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:46.096 { 00:17:46.096 "subsystems": [ 00:17:46.096 { 00:17:46.096 "subsystem": "bdev", 00:17:46.096 "config": [ 00:17:46.096 { 00:17:46.096 "params": { 00:17:46.096 "io_mechanism": "libaio", 00:17:46.096 "conserve_cpu": true, 00:17:46.096 "filename": "/dev/nvme0n1", 00:17:46.096 "name": "xnvme_bdev" 00:17:46.096 }, 00:17:46.096 "method": "bdev_xnvme_create" 00:17:46.096 }, 00:17:46.096 { 00:17:46.096 "method": "bdev_wait_for_examine" 00:17:46.096 } 00:17:46.096 ] 00:17:46.096 } 00:17:46.096 ] 00:17:46.096 } 00:17:46.096 [2024-11-25 20:35:53.924289] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
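Both bdevperf invocations in this test use the same knob set: -q 64 sets the queue depth, -o 4096 the I/O size in bytes, -w the workload pattern (randread above, randwrite next), and -t 5 the runtime in seconds; -T xnvme_bdev, as used throughout this suite, targets the run at the bdev created by the JSON config (flag meanings as read from these invocations rather than from bdevperf --help). A sketch of the conserve_cpu comparison this stage is effectively making, with illustrative config paths since the harness regenerates the JSON via gen_conf for each pass:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    for conf in /tmp/xnvme_libaio.json /tmp/xnvme_libaio_cc.json; do
        "$BDEVPERF" --json "$conf" -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
    done

The two config files differ only in "conserve_cpu": false versus true.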
00:17:46.096 [2024-11-25 20:35:53.924472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71337 ] 00:17:46.096 [2024-11-25 20:35:54.110666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.399 [2024-11-25 20:35:54.270139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.659 Running I/O for 5 seconds... 00:17:48.974 17007.00 IOPS, 66.43 MiB/s [2024-11-25T20:35:58.047Z] 13955.50 IOPS, 54.51 MiB/s [2024-11-25T20:35:58.986Z] 10834.67 IOPS, 42.32 MiB/s [2024-11-25T20:35:59.921Z] 12153.25 IOPS, 47.47 MiB/s [2024-11-25T20:35:59.921Z] 13542.80 IOPS, 52.90 MiB/s 00:17:51.785 Latency(us) 00:17:51.785 [2024-11-25T20:35:59.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.785 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:51.785 xnvme_bdev : 5.01 13541.93 52.90 0.00 0.00 4719.19 46.27 53902.70 00:17:51.785 [2024-11-25T20:35:59.921Z] =================================================================================================================== 00:17:51.785 [2024-11-25T20:35:59.921Z] Total : 13541.93 52.90 0.00 0.00 4719.19 46.27 53902.70 00:17:52.721 00:17:52.721 real 0m14.140s 00:17:52.721 user 0m8.096s 00:17:52.721 sys 0m4.087s 00:17:52.721 20:36:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.979 ************************************ 00:17:52.979 END TEST xnvme_bdevperf 00:17:52.979 ************************************ 00:17:52.979 20:36:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:52.979 20:36:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:52.979 20:36:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:52.979 20:36:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.979 20:36:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:52.979 ************************************ 00:17:52.979 START TEST xnvme_fio_plugin 00:17:52.979 ************************************ 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:52.979 { 00:17:52.979 "subsystems": [ 00:17:52.979 { 00:17:52.979 "subsystem": "bdev", 00:17:52.979 "config": [ 00:17:52.979 { 00:17:52.979 "params": { 00:17:52.979 "io_mechanism": "libaio", 00:17:52.979 "conserve_cpu": true, 00:17:52.979 "filename": "/dev/nvme0n1", 00:17:52.979 "name": "xnvme_bdev" 00:17:52.979 }, 00:17:52.979 "method": "bdev_xnvme_create" 00:17:52.979 }, 00:17:52.979 { 00:17:52.979 "method": "bdev_wait_for_examine" 00:17:52.979 } 00:17:52.979 ] 00:17:52.979 } 00:17:52.979 ] 00:17:52.979 } 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:52.979 20:36:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:53.238 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:53.238 fio-3.35 00:17:53.238 Starting 1 thread 00:17:59.822 00:17:59.822 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71462: Mon Nov 25 20:36:06 2024 00:17:59.822 read: IOPS=44.8k, BW=175MiB/s (183MB/s)(875MiB/5001msec) 00:17:59.822 slat (usec): min=4, max=1157, avg=19.52, stdev=31.10 00:17:59.822 clat (usec): min=84, max=5351, avg=850.36, stdev=493.60 00:17:59.822 lat (usec): min=120, max=5418, avg=869.88, stdev=496.33 00:17:59.822 clat percentiles (usec): 00:17:59.822 | 1.00th=[ 186], 5.00th=[ 265], 10.00th=[ 338], 20.00th=[ 461], 00:17:59.822 | 30.00th=[ 578], 40.00th=[ 685], 50.00th=[ 791], 60.00th=[ 906], 00:17:59.822 | 70.00th=[ 1012], 80.00th=[ 1139], 90.00th=[ 1336], 95.00th=[ 1565], 00:17:59.822 | 99.00th=[ 2868], 99.50th=[ 3490], 99.90th=[ 4424], 99.95th=[ 4686], 00:17:59.822 | 99.99th=[ 5080] 00:17:59.822 bw ( KiB/s): min=161872, max=200529, 
per=100.00%, avg=179237.60, stdev=14575.73, samples=10 00:17:59.822 iops : min=40468, max=50128, avg=44809.70, stdev=3643.88, samples=10 00:17:59.822 lat (usec) : 100=0.05%, 250=4.01%, 500=19.36%, 750=22.56%, 1000=22.95% 00:17:59.822 lat (msec) : 2=28.66%, 4=2.16%, 10=0.25% 00:17:59.822 cpu : usr=25.84%, sys=56.32%, ctx=55, majf=0, minf=764 00:17:59.822 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=10.9%, 16=26.1%, 32=56.5%, >=64=1.8% 00:17:59.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.822 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:59.822 issued rwts: total=223955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.822 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:59.822 00:17:59.822 Run status group 0 (all jobs): 00:17:59.822 READ: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=875MiB (917MB), run=5001-5001msec 00:18:00.391 ----------------------------------------------------- 00:18:00.391 Suppressions used: 00:18:00.391 count bytes template 00:18:00.391 1 11 /usr/src/fio/parse.c 00:18:00.391 1 8 libtcmalloc_minimal.so 00:18:00.391 1 904 libcrypto.so 00:18:00.391 ----------------------------------------------------- 00:18:00.391 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:00.391 { 00:18:00.391 "subsystems": [ 00:18:00.391 { 00:18:00.391 "subsystem": "bdev", 00:18:00.391 "config": [ 00:18:00.391 { 
00:18:00.391 "params": { 00:18:00.391 "io_mechanism": "libaio", 00:18:00.391 "conserve_cpu": true, 00:18:00.391 "filename": "/dev/nvme0n1", 00:18:00.391 "name": "xnvme_bdev" 00:18:00.391 }, 00:18:00.391 "method": "bdev_xnvme_create" 00:18:00.391 }, 00:18:00.391 { 00:18:00.391 "method": "bdev_wait_for_examine" 00:18:00.391 } 00:18:00.391 ] 00:18:00.391 } 00:18:00.391 ] 00:18:00.391 } 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:00.391 20:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.651 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:00.651 fio-3.35 00:18:00.651 Starting 1 thread 00:18:07.252 00:18:07.252 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71559: Mon Nov 25 20:36:14 2024 00:18:07.252 write: IOPS=41.9k, BW=164MiB/s (172MB/s)(819MiB/5001msec); 0 zone resets 00:18:07.252 slat (usec): min=4, max=1153, avg=21.00, stdev=34.06 00:18:07.252 clat (usec): min=85, max=5764, avg=893.43, stdev=495.61 00:18:07.252 lat (usec): min=137, max=5854, avg=914.43, stdev=497.31 00:18:07.252 clat percentiles (usec): 00:18:07.252 | 1.00th=[ 192], 5.00th=[ 277], 10.00th=[ 351], 20.00th=[ 478], 00:18:07.252 | 30.00th=[ 603], 40.00th=[ 725], 50.00th=[ 848], 60.00th=[ 963], 00:18:07.252 | 70.00th=[ 1090], 80.00th=[ 1221], 90.00th=[ 1418], 95.00th=[ 1582], 00:18:07.252 | 99.00th=[ 2802], 99.50th=[ 3392], 99.90th=[ 4293], 99.95th=[ 4555], 00:18:07.252 | 99.99th=[ 5080] 00:18:07.252 bw ( KiB/s): min=148702, max=177928, per=97.84%, avg=164125.00, stdev=9464.16, samples=9 00:18:07.252 iops : min=37175, max=44482, avg=41031.11, stdev=2366.06, samples=9 00:18:07.252 lat (usec) : 100=0.08%, 250=3.44%, 500=18.19%, 750=20.43%, 1000=20.71% 00:18:07.252 lat (msec) : 2=34.89%, 4=2.05%, 10=0.20% 00:18:07.252 cpu : usr=23.88%, sys=59.46%, ctx=39, majf=0, minf=764 00:18:07.252 IO depths : 1=0.1%, 2=0.9%, 4=4.0%, 8=11.5%, 16=26.1%, 32=55.5%, >=64=1.8% 00:18:07.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.252 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:18:07.252 issued rwts: total=0,209720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.252 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.252 00:18:07.252 Run status group 0 (all jobs): 00:18:07.252 WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=819MiB (859MB), run=5001-5001msec 00:18:08.190 ----------------------------------------------------- 00:18:08.190 Suppressions used: 00:18:08.190 count bytes template 00:18:08.190 1 11 /usr/src/fio/parse.c 00:18:08.190 1 8 libtcmalloc_minimal.so 00:18:08.190 1 904 libcrypto.so 00:18:08.190 ----------------------------------------------------- 00:18:08.190 00:18:08.190 00:18:08.190 real 0m15.128s 00:18:08.190 user 0m6.407s 00:18:08.190 sys 0m6.647s 
00:18:08.190 20:36:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.190 20:36:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:08.190 ************************************ 00:18:08.190 END TEST xnvme_fio_plugin 00:18:08.190 ************************************ 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:08.190 20:36:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:08.190 20:36:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.190 20:36:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.190 20:36:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:08.190 ************************************ 00:18:08.190 START TEST xnvme_rpc 00:18:08.190 ************************************ 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:08.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71646 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71646 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71646 ']' 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.190 20:36:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.190 [2024-11-25 20:36:16.249468] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
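This begins the io_uring leg of the matrix: the same xnvme.sh tests rerun with method_bdev_xnvme_create_0["io_mechanism"]=io_uring and conserve_cpu reset to false (the empty cc["false"] value, passed as '' below). A condensed, hypothetical sketch of the io-mechanism x conserve_cpu sweep the suite is walking through; the real harness restarts spdk_tgt and reruns the rpc/bdevperf/fio trio for each combination rather than looping in place:

    for io in libaio io_uring; do
        for cc in '' -c; do
            scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev "$io" $cc
            scripts/rpc.py bdev_xnvme_delete xnvme_bdev
        done
    done

$cc is deliberately left unquoted so the empty conserve_cpu case expands to no flag at all, matching the '' seen in the rpc_cmd call below.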
00:18:08.190 [2024-11-25 20:36:16.249612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ] 00:18:08.450 [2024-11-25 20:36:16.435703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.450 [2024-11-25 20:36:16.579984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.829 xnvme_bdev 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:09.829 20:36:17 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71646 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71646 ']' 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71646 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71646 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71646' 00:18:09.829 killing process with pid 71646 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71646 00:18:09.829 20:36:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71646 00:18:13.120 00:18:13.120 real 0m4.428s 00:18:13.120 user 0m4.338s 00:18:13.120 sys 0m0.721s 00:18:13.120 20:36:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.120 ************************************ 00:18:13.120 END TEST xnvme_rpc 00:18:13.120 ************************************ 00:18:13.120 20:36:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.120 20:36:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:13.120 20:36:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:13.120 20:36:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.120 20:36:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:13.120 ************************************ 00:18:13.120 START TEST xnvme_bdevperf 00:18:13.120 ************************************ 00:18:13.120 20:36:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:13.120 20:36:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:13.120 20:36:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:13.120 20:36:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:13.120 20:36:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:13.120 20:36:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:18:13.120 20:36:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:13.120 20:36:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:13.120 { 00:18:13.120 "subsystems": [ 00:18:13.120 { 00:18:13.120 "subsystem": "bdev", 00:18:13.120 "config": [ 00:18:13.120 { 00:18:13.120 "params": { 00:18:13.121 "io_mechanism": "io_uring", 00:18:13.121 "conserve_cpu": false, 00:18:13.121 "filename": "/dev/nvme0n1", 00:18:13.121 "name": "xnvme_bdev" 00:18:13.121 }, 00:18:13.121 "method": "bdev_xnvme_create" 00:18:13.121 }, 00:18:13.121 { 00:18:13.121 "method": "bdev_wait_for_examine" 00:18:13.121 } 00:18:13.121 ] 00:18:13.121 } 00:18:13.121 ] 00:18:13.121 } 00:18:13.121 [2024-11-25 20:36:20.736308] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:18:13.121 [2024-11-25 20:36:20.736454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71737 ] 00:18:13.121 [2024-11-25 20:36:20.916705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.121 [2024-11-25 20:36:21.061146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.379 Running I/O for 5 seconds... 00:18:15.689 36124.00 IOPS, 141.11 MiB/s [2024-11-25T20:36:24.759Z] 39625.50 IOPS, 154.79 MiB/s [2024-11-25T20:36:25.692Z] 37085.67 IOPS, 144.87 MiB/s [2024-11-25T20:36:26.628Z] 37886.25 IOPS, 147.99 MiB/s [2024-11-25T20:36:26.628Z] 36472.60 IOPS, 142.47 MiB/s 00:18:18.492 Latency(us) 00:18:18.492 [2024-11-25T20:36:26.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.492 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:18.492 xnvme_bdev : 5.01 36433.25 142.32 0.00 0.00 1750.90 193.29 11370.10 00:18:18.492 [2024-11-25T20:36:26.628Z] =================================================================================================================== 00:18:18.492 [2024-11-25T20:36:26.628Z] Total : 36433.25 142.32 0.00 0.00 1750.90 193.29 11370.10 00:18:19.868 20:36:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:19.868 20:36:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:19.868 20:36:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:19.868 20:36:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:19.868 20:36:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:19.868 { 00:18:19.868 "subsystems": [ 00:18:19.868 { 00:18:19.868 "subsystem": "bdev", 00:18:19.868 "config": [ 00:18:19.868 { 00:18:19.868 "params": { 00:18:19.868 "io_mechanism": "io_uring", 00:18:19.868 "conserve_cpu": false, 00:18:19.868 "filename": "/dev/nvme0n1", 00:18:19.868 "name": "xnvme_bdev" 00:18:19.868 }, 00:18:19.868 "method": "bdev_xnvme_create" 00:18:19.868 }, 00:18:19.868 { 00:18:19.868 "method": "bdev_wait_for_examine" 00:18:19.868 } 00:18:19.868 ] 00:18:19.868 } 00:18:19.868 ] 00:18:19.868 } 00:18:19.868 [2024-11-25 20:36:27.810350] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
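The MiB/s column in these bdevperf tables is derived directly from IOPS at the fixed 4096-byte I/O size, which makes the results easy to sanity-check. For the io_uring randread pass just above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 36433.25 * 4096 / (1024 * 1024) }'
    # prints 142.32 MiB/s, matching the reported bandwidth

The same check reproduces the earlier libaio figures (e.g. 39173.79 IOPS x 4 KiB = 153.02 MiB/s).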
00:18:19.868 [2024-11-25 20:36:27.810494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71819 ] 00:18:19.868 [2024-11-25 20:36:28.000101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.127 [2024-11-25 20:36:28.144629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.695 Running I/O for 5 seconds... 00:18:22.576 28852.00 IOPS, 112.70 MiB/s [2024-11-25T20:36:31.663Z] 28513.50 IOPS, 111.38 MiB/s [2024-11-25T20:36:32.597Z] 23773.33 IOPS, 92.86 MiB/s [2024-11-25T20:36:33.973Z] 25121.00 IOPS, 98.13 MiB/s [2024-11-25T20:36:33.973Z] 23056.00 IOPS, 90.06 MiB/s 00:18:25.837 Latency(us) 00:18:25.837 [2024-11-25T20:36:33.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.837 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:25.837 xnvme_bdev : 5.00 23055.76 90.06 0.00 0.00 2772.14 60.86 92645.27 00:18:25.837 [2024-11-25T20:36:33.973Z] =================================================================================================================== 00:18:25.837 [2024-11-25T20:36:33.973Z] Total : 23055.76 90.06 0.00 0.00 2772.14 60.86 92645.27 00:18:26.772 ************************************ 00:18:26.772 END TEST xnvme_bdevperf 00:18:26.772 ************************************ 00:18:26.772 00:18:26.772 real 0m14.128s 00:18:26.772 user 0m6.061s 00:18:26.772 sys 0m7.869s 00:18:26.772 20:36:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.772 20:36:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:26.772 20:36:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:26.773 20:36:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.773 20:36:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.773 20:36:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:26.773 ************************************ 00:18:26.773 START TEST xnvme_fio_plugin 00:18:26.773 ************************************ 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:26.773 20:36:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:26.773 { 00:18:26.773 "subsystems": [ 00:18:26.773 { 00:18:26.773 "subsystem": "bdev", 00:18:26.773 "config": [ 00:18:26.773 { 00:18:26.773 "params": { 00:18:26.773 "io_mechanism": "io_uring", 00:18:26.773 "conserve_cpu": false, 00:18:26.773 "filename": "/dev/nvme0n1", 00:18:26.773 "name": "xnvme_bdev" 00:18:26.773 }, 00:18:26.773 "method": "bdev_xnvme_create" 00:18:26.773 }, 00:18:26.773 { 00:18:26.773 "method": "bdev_wait_for_examine" 00:18:26.773 } 00:18:26.773 ] 00:18:26.773 } 00:18:26.773 ] 00:18:26.773 } 00:18:27.032 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:27.032 fio-3.35 00:18:27.032 Starting 1 thread 00:18:33.634 00:18:33.634 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71944: Mon Nov 25 20:36:40 2024 00:18:33.634 read: IOPS=29.2k, BW=114MiB/s (120MB/s)(571MiB/5001msec) 00:18:33.634 slat (usec): min=3, max=114, avg= 5.96, stdev= 2.24 00:18:33.634 clat (usec): min=1225, max=3623, avg=1952.56, stdev=314.18 00:18:33.634 lat (usec): min=1229, max=3645, avg=1958.52, stdev=315.24 00:18:33.634 clat percentiles (usec): 00:18:33.634 | 1.00th=[ 1418], 5.00th=[ 1516], 10.00th=[ 1582], 20.00th=[ 1680], 00:18:33.634 | 30.00th=[ 1745], 40.00th=[ 1827], 50.00th=[ 1909], 60.00th=[ 1991], 00:18:33.634 | 70.00th=[ 2114], 80.00th=[ 2212], 90.00th=[ 2376], 95.00th=[ 2540], 00:18:33.634 | 99.00th=[ 2769], 99.50th=[ 2900], 99.90th=[ 3163], 99.95th=[ 3261], 00:18:33.634 | 99.99th=[ 3490] 00:18:33.634 bw ( KiB/s): min=102400, 
max=129536, per=99.74%, avg=116508.44, stdev=9493.12, samples=9 00:18:33.634 iops : min=25600, max=32384, avg=29127.11, stdev=2373.28, samples=9 00:18:33.634 lat (msec) : 2=60.64%, 4=39.36% 00:18:33.634 cpu : usr=32.68%, sys=66.20%, ctx=13, majf=0, minf=762 00:18:33.634 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:33.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.634 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:33.634 issued rwts: total=146048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.634 00:18:33.634 Run status group 0 (all jobs): 00:18:33.634 READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=571MiB (598MB), run=5001-5001msec 00:18:34.571 ----------------------------------------------------- 00:18:34.571 Suppressions used: 00:18:34.571 count bytes template 00:18:34.571 1 11 /usr/src/fio/parse.c 00:18:34.571 1 8 libtcmalloc_minimal.so 00:18:34.571 1 904 libcrypto.so 00:18:34.571 ----------------------------------------------------- 00:18:34.571 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:34.571 { 00:18:34.571 "subsystems": [ 00:18:34.571 { 00:18:34.571 "subsystem": "bdev", 00:18:34.571 "config": [ 00:18:34.571 { 00:18:34.571 "params": { 00:18:34.571 "io_mechanism": "io_uring", 00:18:34.571 "conserve_cpu": 
false, 00:18:34.571 "filename": "/dev/nvme0n1", 00:18:34.571 "name": "xnvme_bdev" 00:18:34.571 }, 00:18:34.571 "method": "bdev_xnvme_create" 00:18:34.571 }, 00:18:34.571 { 00:18:34.571 "method": "bdev_wait_for_examine" 00:18:34.571 } 00:18:34.571 ] 00:18:34.571 } 00:18:34.571 ] 00:18:34.571 } 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:34.571 20:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:34.571 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:34.571 fio-3.35 00:18:34.571 Starting 1 thread 00:18:41.293 00:18:41.293 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72041: Mon Nov 25 20:36:48 2024 00:18:41.293 write: IOPS=32.6k, BW=127MiB/s (134MB/s)(637MiB/5001msec); 0 zone resets 00:18:41.293 slat (nsec): min=2369, max=58463, avg=5307.61, stdev=2229.25 00:18:41.293 clat (usec): min=672, max=4745, avg=1749.65, stdev=412.71 00:18:41.293 lat (usec): min=680, max=4752, avg=1754.96, stdev=414.09 00:18:41.293 clat percentiles (usec): 00:18:41.293 | 1.00th=[ 955], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1287], 00:18:41.293 | 30.00th=[ 1614], 40.00th=[ 1713], 50.00th=[ 1795], 60.00th=[ 1876], 00:18:41.293 | 70.00th=[ 1975], 80.00th=[ 2073], 90.00th=[ 2245], 95.00th=[ 2376], 00:18:41.293 | 99.00th=[ 2606], 99.50th=[ 2704], 99.90th=[ 3097], 99.95th=[ 3294], 00:18:41.293 | 99.99th=[ 4686] 00:18:41.293 bw ( KiB/s): min=111104, max=202240, per=100.00%, avg=132664.89, stdev=29156.42, samples=9 00:18:41.293 iops : min=27776, max=50560, avg=33166.22, stdev=7289.10, samples=9 00:18:41.293 lat (usec) : 750=0.01%, 1000=2.65% 00:18:41.293 lat (msec) : 2=70.60%, 4=26.70%, 10=0.04% 00:18:41.293 cpu : usr=32.26%, sys=66.72%, ctx=7, majf=0, minf=762 00:18:41.293 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:41.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:41.293 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:41.293 issued rwts: total=0,163024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:41.293 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:41.293 00:18:41.293 Run status group 0 (all jobs): 00:18:41.293 WRITE: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=637MiB (668MB), run=5001-5001msec 00:18:41.862 ----------------------------------------------------- 00:18:41.862 Suppressions used: 00:18:41.862 count bytes template 00:18:41.862 1 11 /usr/src/fio/parse.c 00:18:41.862 1 8 libtcmalloc_minimal.so 00:18:41.862 1 904 libcrypto.so 00:18:41.862 ----------------------------------------------------- 00:18:41.862 00:18:42.122 00:18:42.122 real 0m15.164s 00:18:42.122 user 0m7.276s 00:18:42.122 sys 0m7.523s 00:18:42.122 20:36:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.122 
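The randread and randwrite figures above come from fio driving SPDK's external bdev ioengine. The full invocation is visible in the shell trace; a minimal standalone sketch, assuming the plugin was built at build/fio/spdk_bdev and the JSON subsystem config shown above is saved as bdev.json (a placeholder name, since the harness streams it over /dev/fd/62 instead of a file):

cd /home/vagrant/spdk_repo/spdk
# the plugin .so is preloaded so fio can find the spdk_bdev ioengine
LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev

In the trace LD_PRELOAD also carries the ASan runtime, because this build is sanitized; that detection loop is sketched at the end of this section.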
************************************ 00:18:42.122 END TEST xnvme_fio_plugin 00:18:42.122 ************************************ 00:18:42.122 20:36:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:42.123 20:36:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:42.123 20:36:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:42.123 20:36:50 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:42.123 20:36:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:42.123 20:36:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:42.123 20:36:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.123 20:36:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:42.123 ************************************ 00:18:42.123 START TEST xnvme_rpc 00:18:42.123 ************************************ 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72127 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72127 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72127 ']' 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.123 20:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.123 [2024-11-25 20:36:50.202704] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
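xnvme_rpc exercises the same bdev lifecycle over JSON-RPC instead of a config file: start spdk_tgt (above), create the xnvme bdev, read its parameters back, and delete it. A sketch of the equivalent manual session, assuming scripts/rpc.py from the SPDK repo and the default /var/tmp/spdk.sock socket; -c is the conserve_cpu flag the trace passes for this pass:

# create over io_uring with conserve_cpu enabled
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
# read the registered params back, as the test's rpc_xnvme helper does
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params'
scripts/rpc.py bdev_xnvme_delete xnvme_bdev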
00:18:42.123 [2024-11-25 20:36:50.203068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72127 ] 00:18:42.381 [2024-11-25 20:36:50.379172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.641 [2024-11-25 20:36:50.516989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 xnvme_bdev 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:43.576 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:43.835 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:43.835 20:36:51 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.835 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:43.835 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.835 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72127 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72127 ']' 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72127 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72127 00:18:43.836 killing process with pid 72127 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72127' 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72127 00:18:43.836 20:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72127 00:18:46.372 00:18:46.372 real 0m4.408s 00:18:46.372 user 0m4.346s 00:18:46.372 sys 0m0.706s 00:18:46.372 20:36:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.372 ************************************ 00:18:46.372 END TEST xnvme_rpc 00:18:46.372 ************************************ 00:18:46.372 20:36:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.630 20:36:54 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:46.630 20:36:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:46.630 20:36:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.630 20:36:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:46.630 ************************************ 00:18:46.630 START TEST xnvme_bdevperf 00:18:46.630 ************************************ 00:18:46.630 20:36:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:46.630 20:36:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:46.630 20:36:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:46.630 20:36:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:46.630 20:36:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:46.630 20:36:54 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:46.630 20:36:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:46.630 20:36:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:46.630 { 00:18:46.630 "subsystems": [ 00:18:46.630 { 00:18:46.630 "subsystem": "bdev", 00:18:46.630 "config": [ 00:18:46.630 { 00:18:46.630 "params": { 00:18:46.630 "io_mechanism": "io_uring", 00:18:46.630 "conserve_cpu": true, 00:18:46.630 "filename": "/dev/nvme0n1", 00:18:46.630 "name": "xnvme_bdev" 00:18:46.630 }, 00:18:46.630 "method": "bdev_xnvme_create" 00:18:46.630 }, 00:18:46.630 { 00:18:46.630 "method": "bdev_wait_for_examine" 00:18:46.630 } 00:18:46.630 ] 00:18:46.630 } 00:18:46.630 ] 00:18:46.630 } 00:18:46.630 [2024-11-25 20:36:54.666306] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:18:46.630 [2024-11-25 20:36:54.666601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72218 ] 00:18:46.890 [2024-11-25 20:36:54.852401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.890 [2024-11-25 20:36:54.997929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.458 Running I/O for 5 seconds... 00:18:49.332 42496.00 IOPS, 166.00 MiB/s [2024-11-25T20:36:58.400Z] 49023.50 IOPS, 191.50 MiB/s [2024-11-25T20:36:59.774Z] 52415.33 IOPS, 204.75 MiB/s [2024-11-25T20:37:00.710Z] 50943.50 IOPS, 199.00 MiB/s [2024-11-25T20:37:00.710Z] 50802.80 IOPS, 198.45 MiB/s 00:18:52.574 Latency(us) 00:18:52.574 [2024-11-25T20:37:00.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.574 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:52.574 xnvme_bdev : 5.01 50739.94 198.20 0.00 0.00 1258.05 733.66 7316.87 00:18:52.574 [2024-11-25T20:37:00.710Z] =================================================================================================================== 00:18:52.574 [2024-11-25T20:37:00.710Z] Total : 50739.94 198.20 0.00 0.00 1258.05 733.66 7316.87 00:18:53.511 20:37:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:53.511 20:37:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:53.511 20:37:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:53.511 20:37:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:53.511 20:37:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:53.770 { 00:18:53.770 "subsystems": [ 00:18:53.770 { 00:18:53.770 "subsystem": "bdev", 00:18:53.770 "config": [ 00:18:53.770 { 00:18:53.770 "params": { 00:18:53.770 "io_mechanism": "io_uring", 00:18:53.770 "conserve_cpu": true, 00:18:53.770 "filename": "/dev/nvme0n1", 00:18:53.770 "name": "xnvme_bdev" 00:18:53.770 }, 00:18:53.770 "method": "bdev_xnvme_create" 00:18:53.770 }, 00:18:53.770 { 00:18:53.770 "method": "bdev_wait_for_examine" 00:18:53.770 } 00:18:53.770 ] 00:18:53.770 } 00:18:53.770 ] 00:18:53.770 } 00:18:53.770 [2024-11-25 20:37:01.700850] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
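The latency table above is bdevperf output; the trace shows the full command line. A standalone sketch, again assuming the JSON config is saved as bdev.json rather than streamed over /dev/fd/62:

# -q queue depth, -w workload, -t runtime in seconds, -o IO size in bytes,
# -T names the bdev under test
build/examples/bdevperf --json bdev.json -q 64 -w randread -t 5 \
    -T xnvme_bdev -o 4096

The randwrite pass starting below differs only in -w randwrite.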
00:18:53.770 [2024-11-25 20:37:01.701165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72304 ] 00:18:53.770 [2024-11-25 20:37:01.888548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.030 [2024-11-25 20:37:02.033983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.598 Running I/O for 5 seconds... 00:18:56.475 28032.00 IOPS, 109.50 MiB/s [2024-11-25T20:37:05.548Z] 26624.00 IOPS, 104.00 MiB/s [2024-11-25T20:37:06.543Z] 27904.00 IOPS, 109.00 MiB/s [2024-11-25T20:37:07.480Z] 28656.00 IOPS, 111.94 MiB/s [2024-11-25T20:37:07.480Z] 27980.80 IOPS, 109.30 MiB/s 00:18:59.344 Latency(us) 00:18:59.344 [2024-11-25T20:37:07.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.344 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:59.344 xnvme_bdev : 5.01 27949.58 109.18 0.00 0.00 2282.80 1052.79 7737.99 00:18:59.344 [2024-11-25T20:37:07.480Z] =================================================================================================================== 00:18:59.344 [2024-11-25T20:37:07.480Z] Total : 27949.58 109.18 0.00 0.00 2282.80 1052.79 7737.99 00:19:00.725 00:19:00.725 real 0m14.096s 00:19:00.725 user 0m7.765s 00:19:00.725 sys 0m5.871s 00:19:00.725 20:37:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.725 20:37:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:00.725 ************************************ 00:19:00.725 END TEST xnvme_bdevperf 00:19:00.725 ************************************ 00:19:00.725 20:37:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:00.725 20:37:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:00.725 20:37:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.725 20:37:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.725 ************************************ 00:19:00.725 START TEST xnvme_fio_plugin 00:19:00.725 ************************************ 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.725 20:37:08 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:00.725 20:37:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.725 { 00:19:00.725 "subsystems": [ 00:19:00.725 { 00:19:00.725 "subsystem": "bdev", 00:19:00.725 "config": [ 00:19:00.725 { 00:19:00.725 "params": { 00:19:00.725 "io_mechanism": "io_uring", 00:19:00.725 "conserve_cpu": true, 00:19:00.725 "filename": "/dev/nvme0n1", 00:19:00.725 "name": "xnvme_bdev" 00:19:00.725 }, 00:19:00.725 "method": "bdev_xnvme_create" 00:19:00.725 }, 00:19:00.725 { 00:19:00.725 "method": "bdev_wait_for_examine" 00:19:00.725 } 00:19:00.725 ] 00:19:00.725 } 00:19:00.725 ] 00:19:00.725 } 00:19:00.984 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:00.984 fio-3.35 00:19:00.984 Starting 1 thread 00:19:07.550 00:19:07.550 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72426: Mon Nov 25 20:37:14 2024 00:19:07.550 read: IOPS=26.7k, BW=104MiB/s (109MB/s)(522MiB/5001msec) 00:19:07.550 slat (usec): min=3, max=103, avg= 6.66, stdev= 2.60 00:19:07.550 clat (usec): min=1161, max=5060, avg=2129.54, stdev=345.81 00:19:07.550 lat (usec): min=1165, max=5075, avg=2136.19, stdev=347.08 00:19:07.550 clat percentiles (usec): 00:19:07.550 | 1.00th=[ 1336], 5.00th=[ 1467], 10.00th=[ 1582], 20.00th=[ 1860], 00:19:07.550 | 30.00th=[ 2008], 40.00th=[ 2114], 50.00th=[ 2180], 60.00th=[ 2245], 00:19:07.550 | 70.00th=[ 2311], 80.00th=[ 2409], 90.00th=[ 2540], 95.00th=[ 2606], 00:19:07.550 | 99.00th=[ 2769], 99.50th=[ 2868], 99.90th=[ 3621], 99.95th=[ 4490], 00:19:07.550 | 99.99th=[ 4948] 00:19:07.550 bw 
( KiB/s): min=96768, max=130560, per=100.00%, avg=107576.89, stdev=12106.51, samples=9 00:19:07.550 iops : min=24192, max=32640, avg=26894.22, stdev=3026.63, samples=9 00:19:07.550 lat (msec) : 2=28.95%, 4=70.96%, 10=0.09% 00:19:07.550 cpu : usr=48.14%, sys=47.68%, ctx=10, majf=0, minf=762 00:19:07.550 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:07.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.550 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:07.550 issued rwts: total=133632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.550 00:19:07.550 Run status group 0 (all jobs): 00:19:07.550 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=522MiB (547MB), run=5001-5001msec 00:19:08.486 ----------------------------------------------------- 00:19:08.486 Suppressions used: 00:19:08.486 count bytes template 00:19:08.486 1 11 /usr/src/fio/parse.c 00:19:08.486 1 8 libtcmalloc_minimal.so 00:19:08.486 1 904 libcrypto.so 00:19:08.486 ----------------------------------------------------- 00:19:08.486 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:08.486 { 00:19:08.486 "subsystems": [ 00:19:08.486 { 00:19:08.486 "subsystem": "bdev", 00:19:08.486 "config": [ 00:19:08.486 { 00:19:08.486 "params": { 00:19:08.486 "io_mechanism": 
"io_uring", 00:19:08.486 "conserve_cpu": true, 00:19:08.486 "filename": "/dev/nvme0n1", 00:19:08.486 "name": "xnvme_bdev" 00:19:08.486 }, 00:19:08.486 "method": "bdev_xnvme_create" 00:19:08.486 }, 00:19:08.486 { 00:19:08.486 "method": "bdev_wait_for_examine" 00:19:08.486 } 00:19:08.486 ] 00:19:08.486 } 00:19:08.486 ] 00:19:08.486 } 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:08.486 20:37:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:08.486 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:08.486 fio-3.35 00:19:08.486 Starting 1 thread 00:19:15.052 00:19:15.052 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72525: Mon Nov 25 20:37:22 2024 00:19:15.052 write: IOPS=27.2k, BW=106MiB/s (111MB/s)(531MiB/5002msec); 0 zone resets 00:19:15.052 slat (nsec): min=2491, max=68380, avg=6677.77, stdev=2602.48 00:19:15.052 clat (usec): min=993, max=3672, avg=2090.54, stdev=356.96 00:19:15.052 lat (usec): min=997, max=3711, avg=2097.21, stdev=358.32 00:19:15.052 clat percentiles (usec): 00:19:15.052 | 1.00th=[ 1221], 5.00th=[ 1500], 10.00th=[ 1598], 20.00th=[ 1745], 00:19:15.052 | 30.00th=[ 1909], 40.00th=[ 2024], 50.00th=[ 2114], 60.00th=[ 2212], 00:19:15.052 | 70.00th=[ 2311], 80.00th=[ 2409], 90.00th=[ 2540], 95.00th=[ 2638], 00:19:15.052 | 99.00th=[ 2802], 99.50th=[ 2933], 99.90th=[ 3294], 99.95th=[ 3392], 00:19:15.052 | 99.99th=[ 3556] 00:19:15.052 bw ( KiB/s): min=96768, max=123904, per=99.21%, avg=107896.44, stdev=11233.16, samples=9 00:19:15.052 iops : min=24192, max=30976, avg=26974.11, stdev=2808.29, samples=9 00:19:15.052 lat (usec) : 1000=0.01% 00:19:15.052 lat (msec) : 2=37.06%, 4=62.94% 00:19:15.052 cpu : usr=48.89%, sys=47.31%, ctx=7, majf=0, minf=762 00:19:15.052 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:15.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.052 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:15.052 issued rwts: total=0,136000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.052 00:19:15.052 Run status group 0 (all jobs): 00:19:15.052 WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=531MiB (557MB), run=5002-5002msec 00:19:15.991 ----------------------------------------------------- 00:19:15.991 Suppressions used: 00:19:15.991 count bytes template 00:19:15.991 1 11 /usr/src/fio/parse.c 00:19:15.991 1 8 libtcmalloc_minimal.so 00:19:15.991 1 904 libcrypto.so 00:19:15.991 ----------------------------------------------------- 00:19:15.991 00:19:15.991 00:19:15.991 real 0m15.165s 00:19:15.991 user 0m8.936s 00:19:15.991 sys 0m5.566s 00:19:15.991 20:37:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:19:15.991 20:37:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:15.991 ************************************ 00:19:15.991 END TEST xnvme_fio_plugin 00:19:15.991 ************************************ 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:15.991 20:37:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:15.991 20:37:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:15.991 20:37:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.991 20:37:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.991 ************************************ 00:19:15.991 START TEST xnvme_rpc 00:19:15.991 ************************************ 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:15.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72613 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72613 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72613 ']' 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.991 20:37:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.991 [2024-11-25 20:37:24.092503] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
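This xnvme_rpc pass switches the io mechanism to io_uring_cmd, which submits NVMe passthrough commands to the character device /dev/ng0n1 rather than block I/O to /dev/nvme0n1. conserve_cpu is false on this pass, so the create call carries no -c flag; a sketch against a running spdk_tgt:

scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
scripts/rpc.py bdev_xnvme_delete xnvme_bdev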
00:19:15.991 [2024-11-25 20:37:24.093507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72613 ] 00:19:16.250 [2024-11-25 20:37:24.277599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.509 [2024-11-25 20:37:24.417144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:17.446 xnvme_bdev 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:17.446 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72613 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72613 ']' 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72613 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72613 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.705 killing process with pid 72613 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72613' 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72613 00:19:17.705 20:37:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72613 00:19:20.242 00:19:20.242 real 0m4.347s 00:19:20.242 user 0m4.208s 00:19:20.242 sys 0m0.722s 00:19:20.242 20:37:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.242 20:37:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.242 ************************************ 00:19:20.242 END TEST xnvme_rpc 00:19:20.242 ************************************ 00:19:20.502 20:37:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:20.502 20:37:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:20.502 20:37:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.502 20:37:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.502 ************************************ 00:19:20.502 START TEST xnvme_bdevperf 00:19:20.502 ************************************ 00:19:20.502 20:37:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:20.502 20:37:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:20.502 20:37:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:20.502 20:37:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:20.502 20:37:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:20.502 20:37:28 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:20.502 20:37:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:20.502 20:37:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:20.502 { 00:19:20.502 "subsystems": [ 00:19:20.502 { 00:19:20.502 "subsystem": "bdev", 00:19:20.502 "config": [ 00:19:20.502 { 00:19:20.502 "params": { 00:19:20.502 "io_mechanism": "io_uring_cmd", 00:19:20.502 "conserve_cpu": false, 00:19:20.502 "filename": "/dev/ng0n1", 00:19:20.502 "name": "xnvme_bdev" 00:19:20.502 }, 00:19:20.502 "method": "bdev_xnvme_create" 00:19:20.502 }, 00:19:20.502 { 00:19:20.502 "method": "bdev_wait_for_examine" 00:19:20.502 } 00:19:20.502 ] 00:19:20.502 } 00:19:20.502 ] 00:19:20.502 } 00:19:20.502 [2024-11-25 20:37:28.497615] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:19:20.502 [2024-11-25 20:37:28.497787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72698 ] 00:19:20.780 [2024-11-25 20:37:28.685522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.780 [2024-11-25 20:37:28.830195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.347 Running I/O for 5 seconds... 00:19:23.220 30208.00 IOPS, 118.00 MiB/s [2024-11-25T20:37:32.292Z] 32192.00 IOPS, 125.75 MiB/s [2024-11-25T20:37:33.686Z] 31018.67 IOPS, 121.17 MiB/s [2024-11-25T20:37:34.253Z] 30080.00 IOPS, 117.50 MiB/s 00:19:26.117 Latency(us) 00:19:26.117 [2024-11-25T20:37:34.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.117 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:26.117 xnvme_bdev : 5.00 29925.56 116.90 0.00 0.00 2132.35 1006.73 7685.35 00:19:26.117 [2024-11-25T20:37:34.253Z] =================================================================================================================== 00:19:26.117 [2024-11-25T20:37:34.253Z] Total : 29925.56 116.90 0.00 0.00 2132.35 1006.73 7685.35 00:19:27.495 20:37:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:27.495 20:37:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:27.495 20:37:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:27.495 20:37:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:27.495 20:37:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:27.495 { 00:19:27.495 "subsystems": [ 00:19:27.495 { 00:19:27.495 "subsystem": "bdev", 00:19:27.495 "config": [ 00:19:27.495 { 00:19:27.495 "params": { 00:19:27.495 "io_mechanism": "io_uring_cmd", 00:19:27.495 "conserve_cpu": false, 00:19:27.495 "filename": "/dev/ng0n1", 00:19:27.495 "name": "xnvme_bdev" 00:19:27.495 }, 00:19:27.495 "method": "bdev_xnvme_create" 00:19:27.495 }, 00:19:27.495 { 00:19:27.495 "method": "bdev_wait_for_examine" 00:19:27.495 } 00:19:27.495 ] 00:19:27.495 } 00:19:27.495 ] 00:19:27.495 } 00:19:27.495 [2024-11-25 20:37:35.555230] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
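None of these runs writes a config file: gen_conf emits the JSON subsystem block on stdout and the harness hands it to the tool as /dev/fd/62. Plain bash process substitution gives the same effect; a sketch for the randwrite bdevperf run that follows, using the exact config printed above:

build/examples/bdevperf -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 \
    --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "io_uring_cmd", "conserve_cpu": false,
              "filename": "/dev/ng0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
)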
00:19:27.495 [2024-11-25 20:37:35.555362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72778 ] 00:19:27.755 [2024-11-25 20:37:35.737984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.755 [2024-11-25 20:37:35.877435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.323 Running I/O for 5 seconds... 00:19:30.194 30592.00 IOPS, 119.50 MiB/s [2024-11-25T20:37:39.301Z] 28384.00 IOPS, 110.88 MiB/s [2024-11-25T20:37:40.679Z] 27647.67 IOPS, 108.00 MiB/s [2024-11-25T20:37:41.617Z] 27983.75 IOPS, 109.31 MiB/s 00:19:33.481 Latency(us) 00:19:33.481 [2024-11-25T20:37:41.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.481 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:33.481 xnvme_bdev : 5.00 29872.16 116.69 0.00 0.00 2135.61 1000.15 7685.35 00:19:33.481 [2024-11-25T20:37:41.617Z] =================================================================================================================== 00:19:33.481 [2024-11-25T20:37:41.617Z] Total : 29872.16 116.69 0.00 0.00 2135.61 1000.15 7685.35 00:19:34.417 20:37:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:34.417 20:37:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:34.417 20:37:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:34.417 20:37:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:34.417 20:37:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:34.677 { 00:19:34.677 "subsystems": [ 00:19:34.677 { 00:19:34.677 "subsystem": "bdev", 00:19:34.677 "config": [ 00:19:34.677 { 00:19:34.677 "params": { 00:19:34.677 "io_mechanism": "io_uring_cmd", 00:19:34.677 "conserve_cpu": false, 00:19:34.677 "filename": "/dev/ng0n1", 00:19:34.677 "name": "xnvme_bdev" 00:19:34.677 }, 00:19:34.677 "method": "bdev_xnvme_create" 00:19:34.677 }, 00:19:34.677 { 00:19:34.677 "method": "bdev_wait_for_examine" 00:19:34.677 } 00:19:34.677 ] 00:19:34.677 } 00:19:34.677 ] 00:19:34.677 } 00:19:34.677 [2024-11-25 20:37:42.618522] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:19:34.677 [2024-11-25 20:37:42.618667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72862 ] 00:19:34.677 [2024-11-25 20:37:42.802188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.935 [2024-11-25 20:37:42.937739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.501 Running I/O for 5 seconds... 
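The run in progress here measures unmap (deallocate), and a write_zeroes pass follows it. Relative to the earlier bdevperf sketch only the workload flag changes:

build/examples/bdevperf --json bdev.json -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096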
00:19:37.368 70464.00 IOPS, 275.25 MiB/s [2024-11-25T20:37:46.441Z] 70592.00 IOPS, 275.75 MiB/s [2024-11-25T20:37:47.376Z] 70378.67 IOPS, 274.92 MiB/s [2024-11-25T20:37:48.749Z] 70096.00 IOPS, 273.81 MiB/s [2024-11-25T20:37:48.749Z] 70297.60 IOPS, 274.60 MiB/s 00:19:40.613 Latency(us) 00:19:40.613 [2024-11-25T20:37:48.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.613 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:40.613 xnvme_bdev : 5.00 70282.17 274.54 0.00 0.00 907.80 559.29 2605.65 00:19:40.613 [2024-11-25T20:37:48.749Z] =================================================================================================================== 00:19:40.613 [2024-11-25T20:37:48.749Z] Total : 70282.17 274.54 0.00 0.00 907.80 559.29 2605.65 00:19:41.575 20:37:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:41.575 20:37:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:41.575 20:37:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:41.575 20:37:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:41.575 20:37:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:41.575 { 00:19:41.575 "subsystems": [ 00:19:41.575 { 00:19:41.575 "subsystem": "bdev", 00:19:41.575 "config": [ 00:19:41.575 { 00:19:41.575 "params": { 00:19:41.575 "io_mechanism": "io_uring_cmd", 00:19:41.575 "conserve_cpu": false, 00:19:41.575 "filename": "/dev/ng0n1", 00:19:41.575 "name": "xnvme_bdev" 00:19:41.575 }, 00:19:41.575 "method": "bdev_xnvme_create" 00:19:41.575 }, 00:19:41.575 { 00:19:41.575 "method": "bdev_wait_for_examine" 00:19:41.575 } 00:19:41.575 ] 00:19:41.576 } 00:19:41.576 ] 00:19:41.576 } 00:19:41.576 [2024-11-25 20:37:49.642818] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:19:41.576 [2024-11-25 20:37:49.642993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72943 ] 00:19:41.834 [2024-11-25 20:37:49.827287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.834 [2024-11-25 20:37:49.963566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.401 Running I/O for 5 seconds... 
00:19:44.271 6376.00 IOPS, 24.91 MiB/s [2024-11-25T20:37:53.784Z] 12645.50 IOPS, 49.40 MiB/s [2024-11-25T20:37:54.721Z] 19162.33 IOPS, 74.85 MiB/s [2024-11-25T20:37:55.660Z] 22757.00 IOPS, 88.89 MiB/s [2024-11-25T20:37:55.660Z] 25072.80 IOPS, 97.94 MiB/s 00:19:47.524 Latency(us) 00:19:47.524 [2024-11-25T20:37:55.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.524 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:47.524 xnvme_bdev : 5.00 25055.71 97.87 0.00 0.00 2545.99 76.90 61482.77 00:19:47.524 [2024-11-25T20:37:55.660Z] =================================================================================================================== 00:19:47.524 [2024-11-25T20:37:55.660Z] Total : 25055.71 97.87 0.00 0.00 2545.99 76.90 61482.77 00:19:48.463 00:19:48.463 real 0m28.174s 00:19:48.463 user 0m14.551s 00:19:48.463 sys 0m13.176s 00:19:48.463 20:37:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.464 20:37:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:48.464 ************************************ 00:19:48.464 END TEST xnvme_bdevperf 00:19:48.464 ************************************ 00:19:48.723 20:37:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:48.724 20:37:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:48.724 20:37:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.724 20:37:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:48.724 ************************************ 00:19:48.724 START TEST xnvme_fio_plugin 00:19:48.724 ************************************ 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
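The ldd pipeline traced through here locates the ASAN runtime the fio plugin was linked against; fio then preloads it so the sanitizer initializes before the plugin is dlopen()ed. Condensed into a sketch, with the same paths as this run and the hypothetical JSON file from the earlier sketch standing in for /dev/fd/62:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev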
00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:48.724 20:37:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:48.724 { 00:19:48.724 "subsystems": [ 00:19:48.724 { 00:19:48.724 "subsystem": "bdev", 00:19:48.724 "config": [ 00:19:48.724 { 00:19:48.724 "params": { 00:19:48.724 "io_mechanism": "io_uring_cmd", 00:19:48.724 "conserve_cpu": false, 00:19:48.724 "filename": "/dev/ng0n1", 00:19:48.724 "name": "xnvme_bdev" 00:19:48.724 }, 00:19:48.724 "method": "bdev_xnvme_create" 00:19:48.724 }, 00:19:48.724 { 00:19:48.724 "method": "bdev_wait_for_examine" 00:19:48.724 } 00:19:48.724 ] 00:19:48.724 } 00:19:48.724 ] 00:19:48.724 } 00:19:48.983 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:48.983 fio-3.35 00:19:48.983 Starting 1 thread 00:19:55.596 00:19:55.596 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73067: Mon Nov 25 20:38:02 2024 00:19:55.596 read: IOPS=27.5k, BW=108MiB/s (113MB/s)(538MiB/5002msec) 00:19:55.596 slat (usec): min=2, max=1511, avg= 6.86, stdev= 5.27 00:19:55.596 clat (usec): min=1033, max=5129, avg=2052.97, stdev=342.04 00:19:55.596 lat (usec): min=1036, max=5139, avg=2059.83, stdev=343.20 00:19:55.596 clat percentiles (usec): 00:19:55.596 | 1.00th=[ 1188], 5.00th=[ 1418], 10.00th=[ 1631], 20.00th=[ 1795], 00:19:55.596 | 30.00th=[ 1893], 40.00th=[ 1991], 50.00th=[ 2073], 60.00th=[ 2147], 00:19:55.596 | 70.00th=[ 2245], 80.00th=[ 2343], 90.00th=[ 2474], 95.00th=[ 2540], 00:19:55.596 | 99.00th=[ 2802], 99.50th=[ 2933], 99.90th=[ 4015], 99.95th=[ 4686], 00:19:55.596 | 99.99th=[ 5014] 00:19:55.596 bw ( KiB/s): min=98816, max=130048, per=99.02%, avg=109112.89, stdev=9142.62, samples=9 00:19:55.596 iops : min=24704, max=32512, avg=27278.22, stdev=2285.66, samples=9 00:19:55.596 lat (msec) : 2=41.63%, 4=58.27%, 10=0.10% 00:19:55.596 cpu : usr=38.01%, sys=60.55%, ctx=36, majf=0, minf=762 00:19:55.596 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:55.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.596 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:55.596 issued 
rwts: total=137792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.596 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.596 00:19:55.596 Run status group 0 (all jobs): 00:19:55.596 READ: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=538MiB (564MB), run=5002-5002msec 00:19:56.164 ----------------------------------------------------- 00:19:56.164 Suppressions used: 00:19:56.164 count bytes template 00:19:56.164 1 11 /usr/src/fio/parse.c 00:19:56.164 1 8 libtcmalloc_minimal.so 00:19:56.164 1 904 libcrypto.so 00:19:56.164 ----------------------------------------------------- 00:19:56.164 00:19:56.164 20:38:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:56.164 20:38:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:56.164 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:56.164 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:56.165 20:38:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:19:56.165 { 00:19:56.165 "subsystems": [ 00:19:56.165 { 00:19:56.165 "subsystem": "bdev", 00:19:56.165 "config": [ 00:19:56.165 { 00:19:56.165 "params": { 00:19:56.165 "io_mechanism": "io_uring_cmd", 00:19:56.165 "conserve_cpu": false, 00:19:56.165 "filename": "/dev/ng0n1", 00:19:56.165 "name": "xnvme_bdev" 00:19:56.165 }, 00:19:56.165 "method": "bdev_xnvme_create" 00:19:56.165 }, 00:19:56.165 { 00:19:56.165 "method": "bdev_wait_for_examine" 00:19:56.165 } 00:19:56.165 ] 00:19:56.165 } 00:19:56.165 ] 00:19:56.165 } 00:19:56.423 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:56.423 fio-3.35 00:19:56.423 Starting 1 thread 00:20:03.023 00:20:03.023 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73158: Mon Nov 25 20:38:10 2024 00:20:03.023 write: IOPS=26.5k, BW=104MiB/s (109MB/s)(518MiB/5001msec); 0 zone resets 00:20:03.023 slat (nsec): min=2414, max=64631, avg=7377.54, stdev=2507.76 00:20:03.023 clat (usec): min=279, max=3340, avg=2125.75, stdev=304.87 00:20:03.023 lat (usec): min=288, max=3350, avg=2133.13, stdev=306.01 00:20:03.023 clat percentiles (usec): 00:20:03.023 | 1.00th=[ 1156], 5.00th=[ 1614], 10.00th=[ 1762], 20.00th=[ 1909], 00:20:03.023 | 30.00th=[ 1991], 40.00th=[ 2073], 50.00th=[ 2147], 60.00th=[ 2212], 00:20:03.023 | 70.00th=[ 2311], 80.00th=[ 2376], 90.00th=[ 2507], 95.00th=[ 2573], 00:20:03.023 | 99.00th=[ 2704], 99.50th=[ 2769], 99.90th=[ 2966], 99.95th=[ 3064], 00:20:03.023 | 99.99th=[ 3261] 00:20:03.023 bw ( KiB/s): min=96768, max=123904, per=99.86%, avg=105870.22, stdev=9560.76, samples=9 00:20:03.023 iops : min=24192, max=30976, avg=26467.56, stdev=2390.19, samples=9 00:20:03.023 lat (usec) : 500=0.01%, 1000=0.10% 00:20:03.023 lat (msec) : 2=30.22%, 4=69.68% 00:20:03.023 cpu : usr=39.46%, sys=59.42%, ctx=13, majf=0, minf=762 00:20:03.023 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:03.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.023 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:03.023 issued rwts: total=0,132550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.023 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:03.023 00:20:03.023 Run status group 0 (all jobs): 00:20:03.023 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=518MiB (543MB), run=5001-5001msec 00:20:03.960 ----------------------------------------------------- 00:20:03.960 Suppressions used: 00:20:03.960 count bytes template 00:20:03.960 1 11 /usr/src/fio/parse.c 00:20:03.960 1 8 libtcmalloc_minimal.so 00:20:03.960 1 904 libcrypto.so 00:20:03.960 ----------------------------------------------------- 00:20:03.960 00:20:03.960 00:20:03.960 real 0m15.139s 00:20:03.960 user 0m7.963s 00:20:03.960 sys 0m6.793s 00:20:03.960 20:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.960 20:38:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:03.960 ************************************ 00:20:03.960 END TEST xnvme_fio_plugin 00:20:03.960 ************************************ 00:20:03.960 20:38:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:03.960 20:38:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:03.960 20:38:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:03.960 20:38:11 nvme_xnvme -- 
xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:03.960 20:38:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:03.960 20:38:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.960 20:38:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.960 ************************************ 00:20:03.960 START TEST xnvme_rpc 00:20:03.960 ************************************ 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73248 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73248 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73248 ']' 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.960 20:38:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.961 20:38:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:03.961 [2024-11-25 20:38:11.976353] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
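Unlike the bdevperf and fio passes, xnvme_rpc exercises the same bdev through the target's RPC surface rather than a JSON config. The round-trip it is about to perform, sketched with scripts/rpc.py against the spdk_tgt started here (default socket /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # create with conserve_cpu enabled, as rpc_cmd does with -c below
    "$rpc" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    # read the parameters back out of the live framework config
    "$rpc" framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # expected output: true
    "$rpc" bdev_xnvme_delete xnvme_bdev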
00:20:03.961 [2024-11-25 20:38:11.976482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73248 ] 00:20:04.220 [2024-11-25 20:38:12.159625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.220 [2024-11-25 20:38:12.300584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.600 xnvme_bdev 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:05.600 
20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73248 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73248 ']' 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73248 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73248 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.600 killing process with pid 73248 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73248' 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73248 00:20:05.600 20:38:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73248 00:20:08.139 00:20:08.139 real 0m4.333s 00:20:08.139 user 0m4.276s 00:20:08.139 sys 0m0.698s 00:20:08.139 20:38:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.139 20:38:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:08.139 ************************************ 00:20:08.139 END TEST xnvme_rpc 00:20:08.139 ************************************ 00:20:08.139 20:38:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:08.139 20:38:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:08.139 20:38:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.139 20:38:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:08.399 ************************************ 00:20:08.399 START TEST xnvme_bdevperf 00:20:08.399 ************************************ 00:20:08.399 20:38:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:08.399 20:38:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:08.399 20:38:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:20:08.399 20:38:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:08.399 20:38:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:08.399 20:38:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:08.399 20:38:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:08.399 20:38:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:08.399 { 00:20:08.399 "subsystems": [ 00:20:08.399 { 00:20:08.399 "subsystem": "bdev", 00:20:08.399 "config": [ 00:20:08.399 { 00:20:08.399 "params": { 00:20:08.399 "io_mechanism": "io_uring_cmd", 00:20:08.399 "conserve_cpu": true, 00:20:08.399 "filename": "/dev/ng0n1", 00:20:08.399 "name": "xnvme_bdev" 00:20:08.399 }, 00:20:08.400 "method": "bdev_xnvme_create" 00:20:08.400 }, 00:20:08.400 { 00:20:08.400 "method": "bdev_wait_for_examine" 00:20:08.400 } 00:20:08.400 ] 00:20:08.400 } 00:20:08.400 ] 00:20:08.400 } 00:20:08.400 [2024-11-25 20:38:16.378546] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:20:08.400 [2024-11-25 20:38:16.378680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73334 ] 00:20:08.659 [2024-11-25 20:38:16.562653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.659 [2024-11-25 20:38:16.699455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.227 Running I/O for 5 seconds... 00:20:11.101 33982.00 IOPS, 132.74 MiB/s [2024-11-25T20:38:20.174Z] 30623.00 IOPS, 119.62 MiB/s [2024-11-25T20:38:21.162Z] 30335.33 IOPS, 118.50 MiB/s [2024-11-25T20:38:22.119Z] 30207.50 IOPS, 118.00 MiB/s [2024-11-25T20:38:22.119Z] 29657.20 IOPS, 115.85 MiB/s 00:20:13.983 Latency(us) 00:20:13.983 [2024-11-25T20:38:22.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.983 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:13.983 xnvme_bdev : 5.01 29610.28 115.67 0.00 0.00 2154.87 1000.15 7474.79 00:20:13.983 [2024-11-25T20:38:22.119Z] =================================================================================================================== 00:20:13.983 [2024-11-25T20:38:22.119Z] Total : 29610.28 115.67 0.00 0.00 2154.87 1000.15 7474.79 00:20:15.362 20:38:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:15.362 20:38:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:15.362 20:38:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:15.362 20:38:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:15.362 20:38:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 { 00:20:15.362 "subsystems": [ 00:20:15.362 { 00:20:15.362 "subsystem": "bdev", 00:20:15.362 "config": [ 00:20:15.362 { 00:20:15.362 "params": { 00:20:15.362 "io_mechanism": "io_uring_cmd", 00:20:15.362 "conserve_cpu": true, 00:20:15.362 "filename": "/dev/ng0n1", 00:20:15.362 "name": "xnvme_bdev" 00:20:15.362 }, 00:20:15.362 "method": "bdev_xnvme_create" 00:20:15.362 }, 00:20:15.362 { 00:20:15.362 "method": "bdev_wait_for_examine" 00:20:15.362 } 00:20:15.362 ] 00:20:15.362 } 00:20:15.362 ] 00:20:15.362 } 00:20:15.362 [2024-11-25 20:38:23.427795] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
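This second bdevperf matrix runs with conserve_cpu flipped to true, which is the only delta in the generated JSON blocks above. Against the hypothetical config file from the first sketch, the equivalent toggle would be:

    sed -i 's/"conserve_cpu": false/"conserve_cpu": true/' /tmp/xnvme_bdev.json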
00:20:15.362 [2024-11-25 20:38:23.427920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73414 ] 00:20:15.621 [2024-11-25 20:38:23.611515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.621 [2024-11-25 20:38:23.748613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.190 Running I/O for 5 seconds... 00:20:18.066 26368.00 IOPS, 103.00 MiB/s [2024-11-25T20:38:27.576Z] 25664.00 IOPS, 100.25 MiB/s [2024-11-25T20:38:28.510Z] 25408.00 IOPS, 99.25 MiB/s [2024-11-25T20:38:29.448Z] 25440.00 IOPS, 99.38 MiB/s 00:20:21.312 Latency(us) 00:20:21.312 [2024-11-25T20:38:29.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.312 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:21.312 xnvme_bdev : 5.01 25278.11 98.74 0.00 0.00 2523.68 1026.47 7369.51 00:20:21.312 [2024-11-25T20:38:29.448Z] =================================================================================================================== 00:20:21.312 [2024-11-25T20:38:29.448Z] Total : 25278.11 98.74 0.00 0.00 2523.68 1026.47 7369.51 00:20:22.250 20:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:22.250 20:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:20:22.250 20:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:22.250 20:38:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:22.250 20:38:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:22.510 { 00:20:22.510 "subsystems": [ 00:20:22.510 { 00:20:22.510 "subsystem": "bdev", 00:20:22.510 "config": [ 00:20:22.510 { 00:20:22.510 "params": { 00:20:22.510 "io_mechanism": "io_uring_cmd", 00:20:22.510 "conserve_cpu": true, 00:20:22.510 "filename": "/dev/ng0n1", 00:20:22.510 "name": "xnvme_bdev" 00:20:22.510 }, 00:20:22.510 "method": "bdev_xnvme_create" 00:20:22.510 }, 00:20:22.510 { 00:20:22.510 "method": "bdev_wait_for_examine" 00:20:22.510 } 00:20:22.510 ] 00:20:22.510 } 00:20:22.510 ] 00:20:22.510 } 00:20:22.510 [2024-11-25 20:38:30.476139] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:20:22.510 [2024-11-25 20:38:30.476258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73488 ] 00:20:22.770 [2024-11-25 20:38:30.659757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.770 [2024-11-25 20:38:30.797134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.338 Running I/O for 5 seconds... 
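Each of these 5-second passes comes from the loop traced at xnvme.sh@15-17: one bdevperf invocation per workload. In condensed form, with the pattern list taken from the runs in this log and the config file from the earlier sketches:

    for io_pattern in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json /tmp/xnvme_bdev.json \
            -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done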
00:20:25.212 70656.00 IOPS, 276.00 MiB/s [2024-11-25T20:38:34.286Z] 70912.00 IOPS, 277.00 MiB/s [2024-11-25T20:38:35.221Z] 71018.67 IOPS, 277.42 MiB/s [2024-11-25T20:38:36.600Z] 71104.00 IOPS, 277.75 MiB/s [2024-11-25T20:38:36.600Z] 71155.20 IOPS, 277.95 MiB/s 00:20:28.464 Latency(us) 00:20:28.464 [2024-11-25T20:38:36.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.464 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:20:28.464 xnvme_bdev : 5.00 71146.19 277.91 0.00 0.00 896.81 430.98 2434.57 00:20:28.464 [2024-11-25T20:38:36.600Z] =================================================================================================================== 00:20:28.464 [2024-11-25T20:38:36.600Z] Total : 71146.19 277.91 0.00 0.00 896.81 430.98 2434.57 00:20:29.430 20:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:29.430 20:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:20:29.430 20:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:29.430 20:38:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:29.430 20:38:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:29.430 { 00:20:29.430 "subsystems": [ 00:20:29.430 { 00:20:29.430 "subsystem": "bdev", 00:20:29.430 "config": [ 00:20:29.430 { 00:20:29.430 "params": { 00:20:29.430 "io_mechanism": "io_uring_cmd", 00:20:29.430 "conserve_cpu": true, 00:20:29.430 "filename": "/dev/ng0n1", 00:20:29.430 "name": "xnvme_bdev" 00:20:29.430 }, 00:20:29.430 "method": "bdev_xnvme_create" 00:20:29.431 }, 00:20:29.431 { 00:20:29.431 "method": "bdev_wait_for_examine" 00:20:29.431 } 00:20:29.431 ] 00:20:29.431 } 00:20:29.431 ] 00:20:29.431 } 00:20:29.431 [2024-11-25 20:38:37.502257] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:20:29.431 [2024-11-25 20:38:37.502402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73572 ] 00:20:29.690 [2024-11-25 20:38:37.685621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.690 [2024-11-25 20:38:37.822213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.259 Running I/O for 5 seconds... 
00:20:32.125 66247.00 IOPS, 258.78 MiB/s [2024-11-25T20:38:41.636Z] 64050.00 IOPS, 250.20 MiB/s [2024-11-25T20:38:42.571Z] 59133.33 IOPS, 230.99 MiB/s [2024-11-25T20:38:43.506Z] 56626.25 IOPS, 221.20 MiB/s [2024-11-25T20:38:43.506Z] 55260.60 IOPS, 215.86 MiB/s 00:20:35.370 Latency(us) 00:20:35.370 [2024-11-25T20:38:43.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.370 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:20:35.370 xnvme_bdev : 5.00 55224.66 215.72 0.00 0.00 1154.34 57.57 16739.32 00:20:35.370 [2024-11-25T20:38:43.506Z] =================================================================================================================== 00:20:35.370 [2024-11-25T20:38:43.506Z] Total : 55224.66 215.72 0.00 0.00 1154.34 57.57 16739.32 00:20:36.308 00:20:36.308 real 0m28.138s 00:20:36.308 user 0m17.668s 00:20:36.308 sys 0m8.319s 00:20:36.308 20:38:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.308 20:38:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:36.308 ************************************ 00:20:36.308 END TEST xnvme_bdevperf 00:20:36.308 ************************************ 00:20:36.568 20:38:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:36.568 20:38:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.568 20:38:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.568 20:38:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:36.568 ************************************ 00:20:36.568 START TEST xnvme_fio_plugin 00:20:36.568 ************************************ 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:36.568 { 00:20:36.568 "subsystems": [ 00:20:36.568 { 00:20:36.568 "subsystem": "bdev", 00:20:36.568 "config": [ 00:20:36.568 { 00:20:36.568 "params": { 00:20:36.568 "io_mechanism": "io_uring_cmd", 00:20:36.568 "conserve_cpu": true, 00:20:36.568 "filename": "/dev/ng0n1", 00:20:36.568 "name": "xnvme_bdev" 00:20:36.568 }, 00:20:36.568 "method": "bdev_xnvme_create" 00:20:36.568 }, 00:20:36.568 { 00:20:36.568 "method": "bdev_wait_for_examine" 00:20:36.568 } 00:20:36.568 ] 00:20:36.568 } 00:20:36.568 ] 00:20:36.568 } 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:36.568 20:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:36.827 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:36.827 fio-3.35 00:20:36.827 Starting 1 thread 00:20:43.397 00:20:43.397 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73696: Mon Nov 25 20:38:50 2024 00:20:43.397 read: IOPS=29.9k, BW=117MiB/s (123MB/s)(585MiB/5001msec) 00:20:43.397 slat (usec): min=2, max=239, avg= 6.15, stdev= 2.37 00:20:43.397 clat (usec): min=916, max=4601, avg=1897.97, stdev=319.28 00:20:43.397 lat (usec): min=919, max=4607, avg=1904.11, stdev=320.43 00:20:43.397 clat percentiles (usec): 00:20:43.397 | 1.00th=[ 1172], 5.00th=[ 1434], 10.00th=[ 1532], 20.00th=[ 1631], 00:20:43.397 | 30.00th=[ 1713], 40.00th=[ 1795], 50.00th=[ 1876], 60.00th=[ 1958], 00:20:43.397 | 70.00th=[ 2057], 80.00th=[ 2180], 90.00th=[ 2311], 95.00th=[ 2442], 00:20:43.397 | 99.00th=[ 2638], 99.50th=[ 2802], 99.90th=[ 3294], 99.95th=[ 3523], 00:20:43.397 | 99.99th=[ 4490] 00:20:43.397 bw ( KiB/s): min=102912, max=134656, per=99.85%, avg=119498.22, stdev=8658.14, samples=9 00:20:43.397 iops : min=25728, max=33664, avg=29874.56, stdev=2164.53, samples=9 00:20:43.397 lat (usec) : 1000=0.04% 00:20:43.397 lat (msec) : 2=64.55%, 4=35.36%, 10=0.04% 00:20:43.397 cpu : usr=49.98%, sys=47.14%, ctx=14, majf=0, minf=762 00:20:43.397 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:43.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.397 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:20:43.397 issued rwts: total=149632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.397 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:43.397 00:20:43.397 Run status group 0 (all jobs): 00:20:43.397 READ: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=585MiB (613MB), run=5001-5001msec 00:20:43.966 ----------------------------------------------------- 00:20:43.966 Suppressions used: 00:20:43.966 count bytes template 00:20:43.966 1 11 /usr/src/fio/parse.c 00:20:43.966 1 8 libtcmalloc_minimal.so 00:20:43.966 1 904 libcrypto.so 00:20:43.966 ----------------------------------------------------- 00:20:43.966 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:43.966 20:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:43.966 { 00:20:43.966 "subsystems": [ 00:20:43.966 { 00:20:43.966 "subsystem": "bdev", 00:20:43.966 "config": [ 00:20:43.966 { 00:20:43.966 "params": { 00:20:43.966 "io_mechanism": "io_uring_cmd", 00:20:43.966 "conserve_cpu": true, 00:20:43.966 "filename": "/dev/ng0n1", 00:20:43.966 "name": "xnvme_bdev" 00:20:43.966 }, 00:20:43.966 "method": "bdev_xnvme_create" 00:20:43.966 }, 00:20:43.966 { 00:20:43.966 "method": "bdev_wait_for_examine" 00:20:43.966 } 00:20:43.966 ] 00:20:43.966 } 00:20:43.966 ] 00:20:43.966 } 00:20:44.226 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:44.226 fio-3.35 00:20:44.226 Starting 1 thread 00:20:50.819 00:20:50.819 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73788: Mon Nov 25 20:38:57 2024 00:20:50.819 write: IOPS=24.8k, BW=96.8MiB/s (102MB/s)(484MiB/5002msec); 0 zone resets 00:20:50.819 slat (usec): min=2, max=164, avg= 8.19, stdev= 2.56 00:20:50.819 clat (usec): min=955, max=4328, avg=2261.52, stdev=280.31 00:20:50.819 lat (usec): min=958, max=4336, avg=2269.71, stdev=281.10 00:20:50.819 clat percentiles (usec): 00:20:50.819 | 1.00th=[ 1172], 5.00th=[ 1860], 10.00th=[ 2008], 20.00th=[ 2089], 00:20:50.819 | 30.00th=[ 2147], 40.00th=[ 2212], 50.00th=[ 2278], 60.00th=[ 2343], 00:20:50.819 | 70.00th=[ 2409], 80.00th=[ 2474], 90.00th=[ 2540], 95.00th=[ 2606], 00:20:50.819 | 99.00th=[ 2704], 99.50th=[ 2802], 99.90th=[ 3490], 99.95th=[ 3982], 00:20:50.819 | 99.99th=[ 4293] 00:20:50.819 bw ( KiB/s): min=94936, max=118272, per=100.00%, avg=99384.89, stdev=7140.94, samples=9 00:20:50.819 iops : min=23736, max=29568, avg=24846.00, stdev=1785.21, samples=9 00:20:50.819 lat (usec) : 1000=0.05% 00:20:50.819 lat (msec) : 2=9.35%, 4=90.55%, 10=0.05% 00:20:50.819 cpu : usr=47.37%, sys=49.75%, ctx=8, majf=0, minf=762 00:20:50.819 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:50.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.819 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:50.819 issued rwts: total=0,123968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:50.819 00:20:50.819 Run status group 0 (all jobs): 00:20:50.819 WRITE: bw=96.8MiB/s (102MB/s), 96.8MiB/s-96.8MiB/s (102MB/s-102MB/s), io=484MiB (508MB), run=5002-5002msec 00:20:51.387 ----------------------------------------------------- 00:20:51.387 Suppressions used: 00:20:51.387 count bytes template 00:20:51.387 1 11 /usr/src/fio/parse.c 00:20:51.387 1 8 libtcmalloc_minimal.so 00:20:51.387 1 904 libcrypto.so 00:20:51.387 ----------------------------------------------------- 00:20:51.387 00:20:51.387 00:20:51.387 real 0m14.916s 00:20:51.387 user 0m8.797s 00:20:51.387 sys 0m5.574s 00:20:51.387 20:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.387 ************************************ 00:20:51.387 END TEST xnvme_fio_plugin 00:20:51.387 ************************************ 00:20:51.387 20:38:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:51.387 20:38:59 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73248 00:20:51.387 20:38:59 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73248 ']' 00:20:51.387 20:38:59 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73248 00:20:51.387 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73248) - No such process 00:20:51.387 Process with pid 73248 is not found 00:20:51.387 20:38:59 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73248 is not found' 00:20:51.387 00:20:51.387 real 3m59.048s 00:20:51.387 user 2m12.396s 00:20:51.387 sys 1m30.726s 00:20:51.387 20:38:59 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.387 20:38:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.387 ************************************ 00:20:51.387 END TEST nvme_xnvme 00:20:51.387 ************************************ 00:20:51.388 20:38:59 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:51.388 20:38:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.388 20:38:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.388 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:51.647 ************************************ 00:20:51.647 START TEST blockdev_xnvme 00:20:51.647 ************************************ 00:20:51.647 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:51.647 * Looking for test storage... 00:20:51.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:51.647 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:51.647 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.647 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:51.647 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.647 20:38:59 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.648 20:38:59 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:20:51.648 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.648 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:51.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.648 --rc genhtml_branch_coverage=1 00:20:51.648 --rc genhtml_function_coverage=1 00:20:51.648 --rc genhtml_legend=1 00:20:51.648 --rc geninfo_all_blocks=1 00:20:51.648 --rc geninfo_unexecuted_blocks=1 00:20:51.648 00:20:51.648 ' 00:20:51.648 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:51.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.648 --rc genhtml_branch_coverage=1 00:20:51.648 --rc genhtml_function_coverage=1 00:20:51.648 --rc genhtml_legend=1 00:20:51.648 --rc geninfo_all_blocks=1 00:20:51.648 --rc geninfo_unexecuted_blocks=1 00:20:51.648 00:20:51.648 ' 00:20:51.648 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:51.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.648 --rc genhtml_branch_coverage=1 00:20:51.648 --rc genhtml_function_coverage=1 00:20:51.648 --rc genhtml_legend=1 00:20:51.648 --rc geninfo_all_blocks=1 00:20:51.648 --rc geninfo_unexecuted_blocks=1 00:20:51.648 00:20:51.648 ' 00:20:51.648 20:38:59 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:51.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.648 --rc genhtml_branch_coverage=1 00:20:51.648 --rc genhtml_function_coverage=1 00:20:51.648 --rc genhtml_legend=1 00:20:51.648 --rc geninfo_all_blocks=1 00:20:51.648 --rc geninfo_unexecuted_blocks=1 00:20:51.648 00:20:51.648 ' 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:51.648 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73928 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73928 00:20:51.907 20:38:59 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73928 ']' 00:20:51.907 20:38:59 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:51.907 20:38:59 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.907 20:38:59 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.907 20:38:59 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.907 20:38:59 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.907 20:38:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.907 [2024-11-25 20:38:59.899534] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
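waitforlisten blocks until the freshly forked spdk_tgt answers on its UNIX socket. A minimal polling equivalent (the real helper in autotest_common.sh is more defensive about timeouts and socket paths):

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" & pid=$!
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done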
00:20:51.907 [2024-11-25 20:38:59.899889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73928 ] 00:20:52.166 [2024-11-25 20:39:00.084398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.166 [2024-11-25 20:39:00.222044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.542 20:39:01 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.542 20:39:01 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:20:53.542 20:39:01 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:53.542 20:39:01 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:20:53.542 20:39:01 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:20:53.542 20:39:01 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:20:53.542 20:39:01 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:54.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.676 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:54.676 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:54.676 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:20:54.676 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:20:54.676 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:54.676 20:39:02 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:54.676 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:20:54.677 20:39:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.677 20:39:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:54.677 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:20:54.677 nvme0n1 00:20:54.677 nvme0n2 00:20:54.677 nvme0n3 00:20:54.677 nvme1n1 00:20:54.935 nvme2n1 00:20:54.935 nvme3n1 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.935 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.935 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:20:54.935 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.935 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.935 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.935 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:54.935 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:54.935 20:39:02 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.935 20:39:02 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:20:54.935 20:39:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.935 20:39:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:54.935 20:39:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "65c94bb1-6d93-4e79-824a-ae51631006c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65c94bb1-6d93-4e79-824a-ae51631006c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "916ae136-7064-4a89-8b6b-f5c5e475848b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "916ae136-7064-4a89-8b6b-f5c5e475848b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "fbd53d60-ccaa-4fc0-935f-783914eb6008"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fbd53d60-ccaa-4fc0-935f-783914eb6008",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "429f53d7-5fcb-4ce2-8332-2cde57bbfcb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "429f53d7-5fcb-4ce2-8332-2cde57bbfcb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9e93f49d-be98-4fff-be0c-f8aa2b41e16f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9e93f49d-be98-4fff-be0c-f8aa2b41e16f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "694e98f5-28b9-43c2-a34c-2680089a95c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "694e98f5-28b9-43c2-a34c-2680089a95c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:54.935 20:39:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:54.935 20:39:03 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:54.935 20:39:03 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:20:54.935 20:39:03 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:54.935 20:39:03 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73928 00:20:54.935 20:39:03 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73928 ']' 00:20:54.935 20:39:03 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73928 00:20:54.936 20:39:03 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:20:54.936 20:39:03 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.936 20:39:03 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73928 00:20:55.194 killing process with pid 73928 00:20:55.194 20:39:03 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.194 20:39:03 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.194 20:39:03 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73928' 00:20:55.194 20:39:03 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73928 00:20:55.194 
20:39:03 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73928 00:20:57.749 20:39:05 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:57.749 20:39:05 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:57.749 20:39:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:57.749 20:39:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.749 20:39:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:57.749 ************************************ 00:20:57.749 START TEST bdev_hello_world 00:20:57.749 ************************************ 00:20:57.749 20:39:05 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:57.749 [2024-11-25 20:39:05.789577] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:20:57.749 [2024-11-25 20:39:05.789903] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74230 ] 00:20:58.007 [2024-11-25 20:39:05.971706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.007 [2024-11-25 20:39:06.106653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.572 [2024-11-25 20:39:06.590608] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:58.572 [2024-11-25 20:39:06.590889] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:20:58.572 [2024-11-25 20:39:06.590920] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:58.572 [2024-11-25 20:39:06.593393] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:58.572 [2024-11-25 20:39:06.593799] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:58.572 [2024-11-25 20:39:06.593824] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:58.572 [2024-11-25 20:39:06.594146] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
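hello_bdev in the run above is driven entirely by a JSON config that creates the xNVMe bdevs before the example opens nvme0n1, writes a buffer, and reads "Hello World!" back. A minimal sketch of an equivalent one-bdev config (parameter names inferred from the bdev_xnvme_create RPC traced earlier; treat the exact schema as an assumption and confirm with scripts/rpc.py bdev_xnvme_create --help):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "filename": "/dev/nvme0n1",
            "name": "nvme0n1",
            "io_mechanism": "io_uring",
            "conserve_cpu": true
          }
        }
      ]
    }
  ]
}
EOF
# "conserve_cpu" is assumed to correspond to the -c flag in the traced rpc command.
# Run the example against the bdev defined above (binary path as in the log).
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b nvme0n1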
00:20:58.572 00:20:58.572 [2024-11-25 20:39:06.594213] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:59.945 00:20:59.945 ************************************ 00:20:59.945 END TEST bdev_hello_world 00:20:59.945 ************************************ 00:20:59.945 real 0m2.087s 00:20:59.945 user 0m1.651s 00:20:59.945 sys 0m0.318s 00:20:59.945 20:39:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.945 20:39:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:59.945 20:39:07 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:59.945 20:39:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:59.945 20:39:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.945 20:39:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:59.945 ************************************ 00:20:59.945 START TEST bdev_bounds 00:20:59.945 ************************************ 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74272 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74272' 00:20:59.945 Process bdevio pid: 74272 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74272 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74272 ']' 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.945 20:39:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:59.945 [2024-11-25 20:39:07.963603] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:20:59.945 [2024-11-25 20:39:07.963942] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74272 ] 00:21:00.204 [2024-11-25 20:39:08.149674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:00.204 [2024-11-25 20:39:08.281474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.204 [2024-11-25 20:39:08.281630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.204 [2024-11-25 20:39:08.281696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.773 20:39:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.773 20:39:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:00.773 20:39:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:00.773 I/O targets: 00:21:00.773 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:00.773 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:00.773 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:00.773 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:21:00.773 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:21:00.773 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:21:00.773 00:21:00.773 00:21:00.773 CUnit - A unit testing framework for C - Version 2.1-3 00:21:00.773 http://cunit.sourceforge.net/ 00:21:00.773 00:21:00.773 00:21:00.773 Suite: bdevio tests on: nvme3n1 00:21:00.773 Test: blockdev write read block ...passed 00:21:00.773 Test: blockdev write zeroes read block ...passed 00:21:00.773 Test: blockdev write zeroes read no split ...passed 00:21:01.032 Test: blockdev write zeroes read split ...passed 00:21:01.032 Test: blockdev write zeroes read split partial ...passed 00:21:01.032 Test: blockdev reset ...passed 00:21:01.032 Test: blockdev write read 8 blocks ...passed 00:21:01.032 Test: blockdev write read size > 128k ...passed 00:21:01.032 Test: blockdev write read invalid size ...passed 00:21:01.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:01.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:01.032 Test: blockdev write read max offset ...passed 00:21:01.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:01.032 Test: blockdev writev readv 8 blocks ...passed 00:21:01.032 Test: blockdev writev readv 30 x 1block ...passed 00:21:01.032 Test: blockdev writev readv block ...passed 00:21:01.032 Test: blockdev writev readv size > 128k ...passed 00:21:01.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:01.032 Test: blockdev comparev and writev ...passed 00:21:01.032 Test: blockdev nvme passthru rw ...passed 00:21:01.032 Test: blockdev nvme passthru vendor specific ...passed 00:21:01.032 Test: blockdev nvme admin passthru ...passed 00:21:01.032 Test: blockdev copy ...passed 00:21:01.032 Suite: bdevio tests on: nvme2n1 00:21:01.032 Test: blockdev write read block ...passed 00:21:01.032 Test: blockdev write zeroes read block ...passed 00:21:01.032 Test: blockdev write zeroes read no split ...passed 00:21:01.032 Test: blockdev write zeroes read split ...passed 00:21:01.032 Test: blockdev write zeroes read split partial ...passed 00:21:01.032 Test: blockdev reset ...passed 
00:21:01.032 Test: blockdev write read 8 blocks ...passed 00:21:01.032 Test: blockdev write read size > 128k ...passed 00:21:01.032 Test: blockdev write read invalid size ...passed 00:21:01.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:01.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:01.032 Test: blockdev write read max offset ...passed 00:21:01.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:01.032 Test: blockdev writev readv 8 blocks ...passed 00:21:01.032 Test: blockdev writev readv 30 x 1block ...passed 00:21:01.032 Test: blockdev writev readv block ...passed 00:21:01.032 Test: blockdev writev readv size > 128k ...passed 00:21:01.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:01.032 Test: blockdev comparev and writev ...passed 00:21:01.032 Test: blockdev nvme passthru rw ...passed 00:21:01.032 Test: blockdev nvme passthru vendor specific ...passed 00:21:01.032 Test: blockdev nvme admin passthru ...passed 00:21:01.032 Test: blockdev copy ...passed 00:21:01.032 Suite: bdevio tests on: nvme1n1 00:21:01.032 Test: blockdev write read block ...passed 00:21:01.032 Test: blockdev write zeroes read block ...passed 00:21:01.032 Test: blockdev write zeroes read no split ...passed 00:21:01.032 Test: blockdev write zeroes read split ...passed 00:21:01.032 Test: blockdev write zeroes read split partial ...passed 00:21:01.032 Test: blockdev reset ...passed 00:21:01.032 Test: blockdev write read 8 blocks ...passed 00:21:01.032 Test: blockdev write read size > 128k ...passed 00:21:01.032 Test: blockdev write read invalid size ...passed 00:21:01.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:01.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:01.032 Test: blockdev write read max offset ...passed 00:21:01.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:01.032 Test: blockdev writev readv 8 blocks ...passed 00:21:01.032 Test: blockdev writev readv 30 x 1block ...passed 00:21:01.032 Test: blockdev writev readv block ...passed 00:21:01.032 Test: blockdev writev readv size > 128k ...passed 00:21:01.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:01.032 Test: blockdev comparev and writev ...passed 00:21:01.032 Test: blockdev nvme passthru rw ...passed 00:21:01.032 Test: blockdev nvme passthru vendor specific ...passed 00:21:01.032 Test: blockdev nvme admin passthru ...passed 00:21:01.032 Test: blockdev copy ...passed 00:21:01.032 Suite: bdevio tests on: nvme0n3 00:21:01.033 Test: blockdev write read block ...passed 00:21:01.033 Test: blockdev write zeroes read block ...passed 00:21:01.033 Test: blockdev write zeroes read no split ...passed 00:21:01.291 Test: blockdev write zeroes read split ...passed 00:21:01.291 Test: blockdev write zeroes read split partial ...passed 00:21:01.291 Test: blockdev reset ...passed 00:21:01.291 Test: blockdev write read 8 blocks ...passed 00:21:01.291 Test: blockdev write read size > 128k ...passed 00:21:01.291 Test: blockdev write read invalid size ...passed 00:21:01.291 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:01.291 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:01.291 Test: blockdev write read max offset ...passed 00:21:01.291 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:01.291 Test: blockdev writev readv 8 blocks 
...passed 00:21:01.291 Test: blockdev writev readv 30 x 1block ...passed 00:21:01.291 Test: blockdev writev readv block ...passed 00:21:01.291 Test: blockdev writev readv size > 128k ...passed 00:21:01.291 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:01.291 Test: blockdev comparev and writev ...passed 00:21:01.291 Test: blockdev nvme passthru rw ...passed 00:21:01.291 Test: blockdev nvme passthru vendor specific ...passed 00:21:01.291 Test: blockdev nvme admin passthru ...passed 00:21:01.291 Test: blockdev copy ...passed 00:21:01.291 Suite: bdevio tests on: nvme0n2 00:21:01.291 Test: blockdev write read block ...passed 00:21:01.291 Test: blockdev write zeroes read block ...passed 00:21:01.291 Test: blockdev write zeroes read no split ...passed 00:21:01.291 Test: blockdev write zeroes read split ...passed 00:21:01.291 Test: blockdev write zeroes read split partial ...passed 00:21:01.291 Test: blockdev reset ...passed 00:21:01.291 Test: blockdev write read 8 blocks ...passed 00:21:01.291 Test: blockdev write read size > 128k ...passed 00:21:01.291 Test: blockdev write read invalid size ...passed 00:21:01.291 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:01.291 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:01.291 Test: blockdev write read max offset ...passed 00:21:01.291 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:01.291 Test: blockdev writev readv 8 blocks ...passed 00:21:01.291 Test: blockdev writev readv 30 x 1block ...passed 00:21:01.291 Test: blockdev writev readv block ...passed 00:21:01.291 Test: blockdev writev readv size > 128k ...passed 00:21:01.291 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:01.291 Test: blockdev comparev and writev ...passed 00:21:01.291 Test: blockdev nvme passthru rw ...passed 00:21:01.291 Test: blockdev nvme passthru vendor specific ...passed 00:21:01.291 Test: blockdev nvme admin passthru ...passed 00:21:01.291 Test: blockdev copy ...passed 00:21:01.291 Suite: bdevio tests on: nvme0n1 00:21:01.291 Test: blockdev write read block ...passed 00:21:01.291 Test: blockdev write zeroes read block ...passed 00:21:01.291 Test: blockdev write zeroes read no split ...passed 00:21:01.291 Test: blockdev write zeroes read split ...passed 00:21:01.291 Test: blockdev write zeroes read split partial ...passed 00:21:01.291 Test: blockdev reset ...passed 00:21:01.291 Test: blockdev write read 8 blocks ...passed 00:21:01.291 Test: blockdev write read size > 128k ...passed 00:21:01.291 Test: blockdev write read invalid size ...passed 00:21:01.291 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:01.291 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:01.291 Test: blockdev write read max offset ...passed 00:21:01.291 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:01.291 Test: blockdev writev readv 8 blocks ...passed 00:21:01.291 Test: blockdev writev readv 30 x 1block ...passed 00:21:01.291 Test: blockdev writev readv block ...passed 00:21:01.291 Test: blockdev writev readv size > 128k ...passed 00:21:01.291 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:01.291 Test: blockdev comparev and writev ...passed 00:21:01.291 Test: blockdev nvme passthru rw ...passed 00:21:01.291 Test: blockdev nvme passthru vendor specific ...passed 00:21:01.291 Test: blockdev nvme admin passthru ...passed 00:21:01.291 Test: blockdev copy ...passed 
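All six bdevio suites above come from a single server process plus one RPC call: bdevio is started in wait-for-RPC mode with the shared JSON config, then tests.py perform_tests fires every "blockdev ..." test against each bdev. A condensed sketch of that two-step invocation (flags copied verbatim from the trace, including the trailing empty argument; a sketch, not the test harness itself):

spdk=/home/vagrant/spdk_repo/spdk
# -w: hold the app until an RPC starts it; -s 0 and '' are passed through as traced.
"$spdk"/test/bdev/bdevio/bdevio -w -s 0 --json "$spdk"/test/bdev/bdev.json '' &
bdevio_pid=$!
# Wait until the app listens on the default RPC socket, then run the suites.
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done
"$spdk"/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"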
00:21:01.291 00:21:01.291 Run Summary: Type Total Ran Passed Failed Inactive 00:21:01.291 suites 6 6 n/a 0 0 00:21:01.291 tests 138 138 138 0 0 00:21:01.291 asserts 780 780 780 0 n/a 00:21:01.291 00:21:01.291 Elapsed time = 1.353 seconds 00:21:01.291 0 00:21:01.291 20:39:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74272 00:21:01.291 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74272 ']' 00:21:01.291 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74272 00:21:01.291 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:01.291 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.291 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74272 00:21:01.550 killing process with pid 74272 00:21:01.550 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.550 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.550 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74272' 00:21:01.550 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74272 00:21:01.550 20:39:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74272 00:21:02.931 20:39:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:02.931 00:21:02.931 real 0m2.813s 00:21:02.931 user 0m6.802s 00:21:02.931 sys 0m0.505s 00:21:02.931 20:39:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.931 ************************************ 00:21:02.931 END TEST bdev_bounds 00:21:02.931 ************************************ 00:21:02.931 20:39:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:02.931 20:39:10 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:21:02.931 20:39:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:02.931 20:39:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.931 20:39:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:02.931 ************************************ 00:21:02.931 START TEST bdev_nbd 00:21:02.931 ************************************ 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74334 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:02.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74334 /var/tmp/spdk-nbd.sock 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74334 ']' 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.931 20:39:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:02.931 [2024-11-25 20:39:10.869236] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
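What follows in the log is the per-device NBD cycle: each xNVMe bdev is exported as /dev/nbdN via nbd_start_disk, verified with a single direct-I/O dd read, and later detached with nbd_stop_disk until nbd_get_disks reports an empty list. A condensed sketch of one such cycle (RPC method names exactly as traced; the scratch-file path is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# Export the bdev over the kernel NBD driver.
"$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
# Wait for the kernel to register the device, as waitfornbd does in the trace.
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
# One 4 KiB direct read proves the data path end to end.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
"$rpc" -s "$sock" nbd_get_disks        # expect an empty list once every device is stopped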
00:21:02.931 [2024-11-25 20:39:10.869556] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.931 [2024-11-25 20:39:11.057161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.191 [2024-11-25 20:39:11.188561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:03.759 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.018 
1+0 records in 00:21:04.018 1+0 records out 00:21:04.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904567 s, 4.5 MB/s 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:04.018 20:39:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.278 1+0 records in 00:21:04.278 1+0 records out 00:21:04.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063067 s, 6.5 MB/s 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:04.278 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:21:04.537 20:39:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.537 1+0 records in 00:21:04.537 1+0 records out 00:21:04.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000774515 s, 5.3 MB/s 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:04.537 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.796 1+0 records in 00:21:04.796 1+0 records out 00:21:04.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601902 s, 6.8 MB/s 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:04.796 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:04.797 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.056 1+0 records in 00:21:05.056 1+0 records out 00:21:05.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768525 s, 5.3 MB/s 00:21:05.056 20:39:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.056 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:05.056 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.056 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.056 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:05.056 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:05.056 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:05.056 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:21:05.315 20:39:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.315 1+0 records in 00:21:05.315 1+0 records out 00:21:05.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113314 s, 3.6 MB/s 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:05.315 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd0", 00:21:05.574 "bdev_name": "nvme0n1" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd1", 00:21:05.574 "bdev_name": "nvme0n2" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd2", 00:21:05.574 "bdev_name": "nvme0n3" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd3", 00:21:05.574 "bdev_name": "nvme1n1" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd4", 00:21:05.574 "bdev_name": "nvme2n1" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd5", 00:21:05.574 "bdev_name": "nvme3n1" 00:21:05.574 } 00:21:05.574 ]' 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd0", 00:21:05.574 "bdev_name": "nvme0n1" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd1", 00:21:05.574 "bdev_name": "nvme0n2" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd2", 00:21:05.574 "bdev_name": "nvme0n3" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd3", 00:21:05.574 "bdev_name": "nvme1n1" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd4", 00:21:05.574 "bdev_name": "nvme2n1" 00:21:05.574 }, 00:21:05.574 { 00:21:05.574 "nbd_device": "/dev/nbd5", 00:21:05.574 "bdev_name": "nvme3n1" 00:21:05.574 } 00:21:05.574 ]' 00:21:05.574 20:39:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.574 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.833 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.093 20:39:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.093 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.352 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.611 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.870 20:39:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:07.130 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:07.131 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:21:07.390 /dev/nbd0 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.390 1+0 records in 00:21:07.390 1+0 records out 00:21:07.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700854 s, 5.8 MB/s 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:07.390 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:21:07.650 /dev/nbd1 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.650 1+0 records in 00:21:07.650 1+0 records out 00:21:07.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409889 s, 10.0 MB/s 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:07.650 20:39:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:07.650 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:21:07.910 /dev/nbd10 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.910 1+0 records in 00:21:07.910 1+0 records out 00:21:07.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607703 s, 6.7 MB/s 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:07.910 20:39:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:21:08.169 /dev/nbd11 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:08.169 20:39:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:08.169 1+0 records in 00:21:08.169 1+0 records out 00:21:08.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660858 s, 6.2 MB/s 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:08.169 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:21:08.428 /dev/nbd12 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:08.428 1+0 records in 00:21:08.428 1+0 records out 00:21:08.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717385 s, 5.7 MB/s 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:08.428 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:21:08.687 /dev/nbd13 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:08.687 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:08.688 1+0 records in 00:21:08.688 1+0 records out 00:21:08.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794697 s, 5.2 MB/s 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:08.688 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd0", 00:21:08.947 "bdev_name": "nvme0n1" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd1", 00:21:08.947 "bdev_name": "nvme0n2" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd10", 00:21:08.947 "bdev_name": "nvme0n3" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd11", 00:21:08.947 "bdev_name": "nvme1n1" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd12", 00:21:08.947 "bdev_name": "nvme2n1" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd13", 00:21:08.947 "bdev_name": "nvme3n1" 00:21:08.947 } 00:21:08.947 ]' 00:21:08.947 20:39:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd0", 00:21:08.947 "bdev_name": "nvme0n1" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd1", 00:21:08.947 "bdev_name": "nvme0n2" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd10", 00:21:08.947 "bdev_name": "nvme0n3" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd11", 00:21:08.947 "bdev_name": "nvme1n1" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd12", 00:21:08.947 "bdev_name": "nvme2n1" 00:21:08.947 }, 00:21:08.947 { 00:21:08.947 "nbd_device": "/dev/nbd13", 00:21:08.947 "bdev_name": "nvme3n1" 00:21:08.947 } 00:21:08.947 ]' 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:08.947 /dev/nbd1 00:21:08.947 /dev/nbd10 00:21:08.947 /dev/nbd11 00:21:08.947 /dev/nbd12 00:21:08.947 /dev/nbd13' 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:08.947 /dev/nbd1 00:21:08.947 /dev/nbd10 00:21:08.947 /dev/nbd11 00:21:08.947 /dev/nbd12 00:21:08.947 /dev/nbd13' 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:08.947 256+0 records in 00:21:08.947 256+0 records out 00:21:08.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014228 s, 73.7 MB/s 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:08.947 20:39:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:09.206 256+0 records in 00:21:09.206 256+0 records out 00:21:09.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120924 s, 8.7 MB/s 00:21:09.206 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:09.206 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:09.206 256+0 records in 00:21:09.206 256+0 records out 00:21:09.206 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.122692 s, 8.5 MB/s 00:21:09.206 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:09.206 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:21:09.465 256+0 records in 00:21:09.465 256+0 records out 00:21:09.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127785 s, 8.2 MB/s 00:21:09.465 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:09.465 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:21:09.465 256+0 records in 00:21:09.465 256+0 records out 00:21:09.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123485 s, 8.5 MB/s 00:21:09.465 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:09.465 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:21:09.724 256+0 records in 00:21:09.724 256+0 records out 00:21:09.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145554 s, 7.2 MB/s 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:21:09.724 256+0 records in 00:21:09.724 256+0 records out 00:21:09.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12804 s, 8.2 MB/s 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:21:09.724 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:09.983 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:09.983 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:09.983 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:09.983 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:09.983 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:09.983 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:09.983 20:39:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:09.983 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:10.241 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:10.500 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:10.760 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:11.018 20:39:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:21:11.276 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:21:11.276 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:21:11.276 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:21:11.277 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:11.277 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:21:11.277 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:21:11.277 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:11.277 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:11.277 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:11.277 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:11.277 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:11.536 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:11.796 malloc_lvol_verify 00:21:11.796 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:11.796 dbc136ba-5933-4bf0-a5d6-05e2a77b39ae 00:21:11.796 20:39:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:12.056 a5aaf26f-d284-4621-8696-38f5fc62dfec 00:21:12.057 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:12.316 /dev/nbd0 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:12.316 mke2fs 1.47.0 (5-Feb-2023) 00:21:12.316 
Discarding device blocks: 0/4096 done 00:21:12.316 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:12.316 00:21:12.316 Allocating group tables: 0/1 done 00:21:12.316 Writing inode tables: 0/1 done 00:21:12.316 Creating journal (1024 blocks): done 00:21:12.316 Writing superblocks and filesystem accounting information: 0/1 done 00:21:12.316 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:12.316 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74334 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74334 ']' 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74334 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74334 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74334' 00:21:12.576 killing process with pid 74334 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74334 00:21:12.576 20:39:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74334 00:21:13.956 20:39:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:13.956 ************************************ 00:21:13.956 END TEST bdev_nbd 00:21:13.956 ************************************ 00:21:13.956 00:21:13.956 real 0m11.133s 00:21:13.956 user 0m14.051s 00:21:13.956 sys 0m4.933s 00:21:13.956 20:39:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.956 20:39:21 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.956 20:39:21 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:13.956 20:39:21 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:21:13.956 20:39:21 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:21:13.956 20:39:21 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:21:13.956 20:39:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:13.956 20:39:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.956 20:39:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:13.956 ************************************ 00:21:13.956 START TEST bdev_fio 00:21:13.956 ************************************ 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:13.956 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:13.956 20:39:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:13.956 
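At this point the trace has moved from the NBD test into bdev_fio: fio_config_gen seeds /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio with a verify-workload template, appends serialize_overlap=1 after confirming the installed fio is a 3.x build driving AIO-style bdevs, and the loop that follows writes one [job_<bdev>] section per xNVMe bdev. A minimal sketch of that assembly, with the [global] options assumed rather than copied from the real template (only the per-bdev "[job_...]" / "filename=..." lines and the serialize_overlap=1 echo are visible in this log):

    #!/usr/bin/env bash
    # Hypothetical reconstruction of the bdev.fio assembly traced above.
    # The [global] entries besides serialize_overlap=1 are placeholders,
    # not taken from the actual fio_config_gen template.
    gen_bdev_fio() {
        local config=$1; shift
        {
            echo '[global]'
            echo 'thread=1'              # assumed default
            echo 'serialize_overlap=1'   # emitted after the fio-3.x version check
            local b
            for b in "$@"; do
                # Matches the "echo '[job_...]'" / "echo filename=..." pairs below
                printf '[job_%s]\nfilename=%s\n' "$b" "$b"
            done
        } > "$config"
    }

    gen_bdev_fio /tmp/bdev.fio nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1

The run that follows points fio at this file through the spdk_bdev ioengine; the --ioengine=spdk_bdev parameters appear a few entries later in the trace.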
20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:13.956 ************************************ 00:21:13.956 START TEST bdev_fio_rw_verify 00:21:13.956 ************************************ 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:13.956 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:14.215 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:14.215 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:14.215 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:21:14.215 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:14.215 20:39:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:14.215 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:14.215 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:14.215 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:14.215 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:14.215 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:14.216 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:14.216 fio-3.35 00:21:14.216 Starting 6 threads 00:21:26.426 00:21:26.426 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74738: Mon Nov 25 20:39:33 2024 00:21:26.426 read: IOPS=34.3k, BW=134MiB/s (141MB/s)(1341MiB/10001msec) 00:21:26.426 slat (usec): min=2, max=1702, avg= 6.00, stdev= 5.53 00:21:26.426 clat (usec): min=95, max=24390, avg=522.37, 
stdev=234.54 00:21:26.426 lat (usec): min=98, max=24403, avg=528.37, stdev=235.25 00:21:26.426 clat percentiles (usec): 00:21:26.426 | 50.000th=[ 529], 99.000th=[ 1057], 99.900th=[ 1762], 99.990th=[ 4113], 00:21:26.426 | 99.999th=[24249] 00:21:26.426 write: IOPS=34.6k, BW=135MiB/s (142MB/s)(1352MiB/10001msec); 0 zone resets 00:21:26.426 slat (usec): min=6, max=3434, avg=24.80, stdev=35.96 00:21:26.426 clat (usec): min=81, max=4104, avg=631.51, stdev=247.57 00:21:26.426 lat (usec): min=98, max=4144, avg=656.31, stdev=254.07 00:21:26.426 clat percentiles (usec): 00:21:26.426 | 50.000th=[ 619], 99.000th=[ 1418], 99.900th=[ 1991], 99.990th=[ 2704], 00:21:26.426 | 99.999th=[ 4047] 00:21:26.426 bw ( KiB/s): min=111033, max=163832, per=100.00%, avg=138562.84, stdev=2720.99, samples=114 00:21:26.426 iops : min=27757, max=40958, avg=34640.42, stdev=680.25, samples=114 00:21:26.426 lat (usec) : 100=0.01%, 250=6.45%, 500=30.72%, 750=46.07%, 1000=12.58% 00:21:26.426 lat (msec) : 2=4.08%, 4=0.08%, 10=0.01%, 50=0.01% 00:21:26.426 cpu : usr=56.10%, sys=28.31%, ctx=8374, majf=0, minf=28322 00:21:26.426 IO depths : 1=11.5%, 2=23.8%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.426 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.426 issued rwts: total=343238,346098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:26.427 00:21:26.427 Run status group 0 (all jobs): 00:21:26.427 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=1341MiB (1406MB), run=10001-10001msec 00:21:26.427 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=1352MiB (1418MB), run=10001-10001msec 00:21:26.687 ----------------------------------------------------- 00:21:26.687 Suppressions used: 00:21:26.687 count bytes template 00:21:26.687 6 48 /usr/src/fio/parse.c 00:21:26.687 2624 251904 /usr/src/fio/iolog.c 00:21:26.687 1 8 libtcmalloc_minimal.so 00:21:26.687 1 904 libcrypto.so 00:21:26.687 ----------------------------------------------------- 00:21:26.687 00:21:26.687 00:21:26.687 real 0m12.700s 00:21:26.687 user 0m35.765s 00:21:26.687 sys 0m17.497s 00:21:26.687 20:39:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.687 ************************************ 00:21:26.687 END TEST bdev_fio_rw_verify 00:21:26.687 ************************************ 00:21:26.687 20:39:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:26.947 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:26.948 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "65c94bb1-6d93-4e79-824a-ae51631006c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65c94bb1-6d93-4e79-824a-ae51631006c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "916ae136-7064-4a89-8b6b-f5c5e475848b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "916ae136-7064-4a89-8b6b-f5c5e475848b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "fbd53d60-ccaa-4fc0-935f-783914eb6008"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fbd53d60-ccaa-4fc0-935f-783914eb6008",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "429f53d7-5fcb-4ce2-8332-2cde57bbfcb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "429f53d7-5fcb-4ce2-8332-2cde57bbfcb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9e93f49d-be98-4fff-be0c-f8aa2b41e16f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9e93f49d-be98-4fff-be0c-f8aa2b41e16f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "694e98f5-28b9-43c2-a34c-2680089a95c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "694e98f5-28b9-43c2-a34c-2680089a95c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:26.948 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:26.948 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:26.948 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:26.948 /home/vagrant/spdk_repo/spdk 00:21:26.948 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:26.948 20:39:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:21:26.948
00:21:26.948 real 0m12.944s
00:21:26.948 user 0m35.885s
00:21:26.948 sys 0m17.626s
00:21:26.948 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:26.948 20:39:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:21:26.948 ************************************
00:21:26.948 END TEST bdev_fio
00:21:26.948 ************************************
00:21:26.948 20:39:34 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:21:26.948 20:39:34 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:21:26.948 20:39:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:21:26.948 20:39:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:26.948 20:39:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:26.948 ************************************
00:21:26.948 START TEST bdev_verify
00:21:26.948 ************************************
00:21:26.948 20:39:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:21:27.207 [2024-11-25 20:39:35.088135] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:21:27.207 [2024-11-25 20:39:35.088282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74921 ]
00:21:27.207 [2024-11-25 20:39:35.275464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:27.467 [2024-11-25 20:39:35.410466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:27.467 [2024-11-25 20:39:35.410499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:28.036 Running I/O for 5 seconds...
00:21:29.984 24480.00 IOPS, 95.62 MiB/s [2024-11-25T20:39:39.498Z] 24880.00 IOPS, 97.19 MiB/s [2024-11-25T20:39:40.434Z] 25130.67 IOPS, 98.17 MiB/s [2024-11-25T20:39:41.371Z] 24864.00 IOPS, 97.12 MiB/s [2024-11-25T20:39:41.371Z] 24928.00 IOPS, 97.38 MiB/s 00:21:33.235 Latency(us) 00:21:33.235 [2024-11-25T20:39:41.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.235 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x0 length 0x80000 00:21:33.235 nvme0n1 : 5.03 1933.37 7.55 0.00 0.00 66111.94 9317.17 65693.92 00:21:33.235 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x80000 length 0x80000 00:21:33.235 nvme0n1 : 5.07 1844.64 7.21 0.00 0.00 69288.57 11843.86 67378.38 00:21:33.235 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x0 length 0x80000 00:21:33.235 nvme0n2 : 5.03 1932.85 7.55 0.00 0.00 66049.88 11159.54 71589.53 00:21:33.235 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x80000 length 0x80000 00:21:33.235 nvme0n2 : 5.05 1823.97 7.12 0.00 0.00 69985.43 12738.72 69483.95 00:21:33.235 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x0 length 0x80000 00:21:33.235 nvme0n3 : 5.04 1928.55 7.53 0.00 0.00 66115.31 13159.84 69483.95 00:21:33.235 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x80000 length 0x80000 00:21:33.235 nvme0n3 : 5.07 1842.14 7.20 0.00 0.00 69197.47 10001.48 60640.54 00:21:33.235 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x0 length 0x20000 00:21:33.235 nvme1n1 : 5.05 1927.92 7.53 0.00 0.00 66062.53 11159.54 71168.41 00:21:33.235 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x20000 length 0x20000 00:21:33.235 nvme1n1 : 5.07 1841.21 7.19 0.00 0.00 69153.89 9527.72 61061.65 00:21:33.235 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x0 length 0xbd0bd 00:21:33.235 nvme2n1 : 5.06 2940.26 11.49 0.00 0.00 43210.84 3921.63 68220.61 00:21:33.235 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:21:33.235 nvme2n1 : 5.07 2915.80 11.39 0.00 0.00 43559.16 3132.04 59377.20 00:21:33.235 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0x0 length 0xa0000 00:21:33.235 nvme3n1 : 5.05 1951.04 7.62 0.00 0.00 65099.34 7001.03 70326.18 00:21:33.235 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.235 Verification LBA range: start 0xa0000 length 0xa0000 00:21:33.235 nvme3n1 : 5.08 1865.97 7.29 0.00 0.00 67959.07 8738.13 66115.03 00:21:33.235 [2024-11-25T20:39:41.371Z] =================================================================================================================== 00:21:33.235 [2024-11-25T20:39:41.372Z] Total : 24747.72 96.67 0.00 0.00 61756.53 3132.04 71589.53 00:21:34.632 00:21:34.632 real 0m7.344s 00:21:34.632 user 0m10.937s 00:21:34.632 sys 0m2.318s 00:21:34.632 20:39:42 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.632 20:39:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:34.632 ************************************ 00:21:34.632 END TEST bdev_verify 00:21:34.632 ************************************ 00:21:34.632 20:39:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:34.632 20:39:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:34.632 20:39:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.632 20:39:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:34.632 ************************************ 00:21:34.632 START TEST bdev_verify_big_io 00:21:34.632 ************************************ 00:21:34.632 20:39:42 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:34.632 [2024-11-25 20:39:42.493858] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:21:34.632 [2024-11-25 20:39:42.493991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75026 ] 00:21:34.632 [2024-11-25 20:39:42.680026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:34.891 [2024-11-25 20:39:42.816415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.891 [2024-11-25 20:39:42.816415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.504 Running I/O for 5 seconds... 
00:21:40.034 1728.00 IOPS, 108.00 MiB/s [2024-11-25T20:39:49.548Z] 3083.00 IOPS, 192.69 MiB/s [2024-11-25T20:39:49.548Z] 3804.00 IOPS, 237.75 MiB/s 00:21:41.412 Latency(us) 00:21:41.412 [2024-11-25T20:39:49.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.412 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x0 length 0x8000 00:21:41.412 nvme0n1 : 5.56 172.72 10.79 0.00 0.00 716036.18 7580.07 764744.58 00:21:41.412 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x8000 length 0x8000 00:21:41.412 nvme0n1 : 5.61 169.62 10.60 0.00 0.00 737856.49 69483.95 1111743.23 00:21:41.412 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x0 length 0x8000 00:21:41.412 nvme0n2 : 5.60 200.01 12.50 0.00 0.00 610733.10 93066.38 650201.34 00:21:41.412 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x8000 length 0x8000 00:21:41.412 nvme0n2 : 5.57 163.69 10.23 0.00 0.00 749648.32 8264.38 737793.23 00:21:41.412 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x0 length 0x8000 00:21:41.412 nvme0n3 : 5.60 153.50 9.59 0.00 0.00 780343.34 48217.65 1846167.54 00:21:41.412 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x8000 length 0x8000 00:21:41.412 nvme0n3 : 5.62 172.24 10.76 0.00 0.00 698781.60 44217.06 656939.18 00:21:41.412 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x0 length 0x2000 00:21:41.412 nvme1n1 : 5.61 179.66 11.23 0.00 0.00 652273.08 46533.19 1610343.22 00:21:41.412 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x2000 length 0x2000 00:21:41.412 nvme1n1 : 5.62 156.52 9.78 0.00 0.00 751901.06 44427.62 1832691.87 00:21:41.412 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x0 length 0xbd0b 00:21:41.412 nvme2n1 : 5.61 151.25 9.45 0.00 0.00 756287.96 13317.76 1670983.76 00:21:41.412 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:41.412 nvme2n1 : 5.63 181.84 11.36 0.00 0.00 633563.30 12791.36 1017413.50 00:21:41.412 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0x0 length 0xa000 00:21:41.412 nvme3n1 : 5.64 189.96 11.87 0.00 0.00 588812.73 1737.10 737793.23 00:21:41.412 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:41.412 Verification LBA range: start 0xa000 length 0xa000 00:21:41.412 nvme3n1 : 5.64 176.03 11.00 0.00 0.00 636236.92 5606.09 1495799.98 00:21:41.412 [2024-11-25T20:39:49.548Z] =================================================================================================================== 00:21:41.412 [2024-11-25T20:39:49.548Z] Total : 2067.03 129.19 0.00 0.00 687881.15 1737.10 1846167.54 00:21:42.790 00:21:42.790 real 0m8.282s 00:21:42.790 user 0m14.867s 00:21:42.790 sys 0m0.681s 00:21:42.790 ************************************ 00:21:42.790 END TEST bdev_verify_big_io 00:21:42.790 ************************************ 
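The bdev_verify and bdev_verify_big_io passes above are the same bdevperf invocation with only the I/O size changed (4096-byte I/Os in the first pass, 65536-byte in the second). A sketch of the command exactly as the log shows it, with the common flags annotated; -C is passed through unannotated, just as blockdev.sh passes it:

#!/usr/bin/env bash
# Re-run the big-I/O verify pass by hand; paths match the log above.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
args=(
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config to load
  -q 128     # queue depth per job
  -o 65536   # I/O size in bytes (4096 in the bdev_verify pass)
  -w verify  # write the range, read it back, compare the payloads
  -t 5       # run time in seconds
  -C         # as passed by the test script
  -m 0x3     # core mask: reactors on cores 0 and 1, as the log shows
)
"$bdevperf" "${args[@]}"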
00:21:42.790 20:39:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:42.790 20:39:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:21:42.790 20:39:50 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:42.790 20:39:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:21:42.790 20:39:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:42.790 20:39:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:42.791 ************************************
00:21:42.791 START TEST bdev_write_zeroes
00:21:42.791 ************************************
00:21:42.791 20:39:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:43.050 [2024-11-25 20:39:50.852597] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:21:43.050 [2024-11-25 20:39:50.852748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75138 ]
00:21:43.050 [2024-11-25 20:39:51.036871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:43.050 [2024-11-25 20:39:51.172302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:43.618 Running I/O for 1 seconds...
00:21:44.998 61984.00 IOPS, 242.12 MiB/s 00:21:44.998 Latency(us) 00:21:44.998 [2024-11-25T20:39:53.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.998 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:44.998 nvme0n1 : 1.02 10072.19 39.34 0.00 0.00 12697.06 8474.94 21476.86 00:21:44.998 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:44.998 nvme0n2 : 1.02 10062.00 39.30 0.00 0.00 12701.74 8632.85 21792.69 00:21:44.998 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:44.998 nvme0n3 : 1.02 10052.84 39.27 0.00 0.00 12704.15 8580.22 22213.81 00:21:44.998 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:44.998 nvme1n1 : 1.02 10043.74 39.23 0.00 0.00 12706.86 8527.58 22634.92 00:21:44.998 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:44.998 nvme2n1 : 1.03 11383.32 44.47 0.00 0.00 11202.49 4737.54 19160.73 00:21:44.998 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:44.998 nvme3n1 : 1.02 10015.41 39.12 0.00 0.00 12646.54 6790.48 21687.42 00:21:44.998 [2024-11-25T20:39:53.134Z] =================================================================================================================== 00:21:44.998 [2024-11-25T20:39:53.134Z] Total : 61629.49 240.74 0.00 0.00 12414.73 4737.54 22634.92 00:21:45.937 00:21:45.937 real 0m3.214s 00:21:45.937 user 0m2.323s 00:21:45.937 sys 0m0.688s 00:21:45.937 20:39:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.937 20:39:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:45.937 ************************************ 00:21:45.937 END TEST bdev_write_zeroes 00:21:45.937 ************************************ 00:21:45.937 20:39:54 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:45.937 20:39:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:45.937 20:39:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.937 20:39:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:45.937 ************************************ 00:21:45.937 START TEST bdev_json_nonenclosed 00:21:45.937 ************************************ 00:21:45.937 20:39:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:46.196 [2024-11-25 20:39:54.145442] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:21:46.196 [2024-11-25 20:39:54.145578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75198 ] 00:21:46.456 [2024-11-25 20:39:54.330076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.456 [2024-11-25 20:39:54.460696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.456 [2024-11-25 20:39:54.460808] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:46.456 [2024-11-25 20:39:54.460834] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:46.456 [2024-11-25 20:39:54.460848] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:46.715 00:21:46.715 real 0m0.689s 00:21:46.715 user 0m0.411s 00:21:46.715 sys 0m0.173s 00:21:46.715 20:39:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.715 20:39:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:46.715 ************************************ 00:21:46.715 END TEST bdev_json_nonenclosed 00:21:46.715 ************************************ 00:21:46.715 20:39:54 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:46.715 20:39:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:46.715 20:39:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.715 20:39:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:46.715 ************************************ 00:21:46.715 START TEST bdev_json_nonarray 00:21:46.715 ************************************ 00:21:46.715 20:39:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:46.975 [2024-11-25 20:39:54.905449] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:21:46.975 [2024-11-25 20:39:54.905594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75229 ] 00:21:46.975 [2024-11-25 20:39:55.089554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.234 [2024-11-25 20:39:55.221087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.234 [2024-11-25 20:39:55.221196] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
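The two JSON negative tests here feed bdevperf fixtures that parse as JSON but are structurally invalid SPDK configs, and json_config_prepare_ctx rejects each with the error printed above. A sketch of the three shapes involved; the fixture bodies are illustrative reconstructions from the two error messages, not the literal contents of test/bdev/nonenclosed.json and nonarray.json:

#!/usr/bin/env bash
# Rejected: "Invalid JSON configuration: not enclosed in {}."
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# Rejected: "Invalid JSON configuration: 'subsystems' should be an array."
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF
# Loadable shape: a top-level object whose "subsystems" value is an array.
cat > good.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF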
00:21:47.234 [2024-11-25 20:39:55.221220] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:47.234 [2024-11-25 20:39:55.221233] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:47.493 00:21:47.493 real 0m0.689s 00:21:47.493 user 0m0.431s 00:21:47.493 sys 0m0.153s 00:21:47.493 20:39:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.493 20:39:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:47.493 ************************************ 00:21:47.493 END TEST bdev_json_nonarray 00:21:47.493 ************************************ 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:21:47.493 20:39:55 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:48.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.731 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.731 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.731 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.731 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.731 00:21:53.731 real 1m1.701s 00:21:53.731 user 1m34.744s 00:21:53.731 sys 0m39.130s 00:21:53.731 20:40:01 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.731 ************************************ 00:21:53.731 END TEST blockdev_xnvme 00:21:53.731 ************************************ 00:21:53.731 20:40:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:53.731 20:40:01 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:53.731 20:40:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.731 20:40:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.731 20:40:01 -- common/autotest_common.sh@10 -- # set +x 00:21:53.731 ************************************ 00:21:53.731 START TEST ublk 00:21:53.731 ************************************ 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:53.731 * Looking for test storage... 
00:21:53.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:53.731 20:40:01 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.731 20:40:01 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.731 20:40:01 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.731 20:40:01 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.731 20:40:01 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.731 20:40:01 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.731 20:40:01 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.731 20:40:01 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.731 20:40:01 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.731 20:40:01 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.731 20:40:01 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.731 20:40:01 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:53.731 20:40:01 ublk -- scripts/common.sh@345 -- # : 1 00:21:53.731 20:40:01 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.731 20:40:01 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:53.731 20:40:01 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:53.731 20:40:01 ublk -- scripts/common.sh@353 -- # local d=1 00:21:53.731 20:40:01 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.731 20:40:01 ublk -- scripts/common.sh@355 -- # echo 1 00:21:53.731 20:40:01 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.731 20:40:01 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:53.731 20:40:01 ublk -- scripts/common.sh@353 -- # local d=2 00:21:53.731 20:40:01 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.731 20:40:01 ublk -- scripts/common.sh@355 -- # echo 2 00:21:53.731 20:40:01 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.731 20:40:01 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.731 20:40:01 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.731 20:40:01 ublk -- scripts/common.sh@368 -- # return 0 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.731 --rc genhtml_branch_coverage=1 00:21:53.731 --rc genhtml_function_coverage=1 00:21:53.731 --rc genhtml_legend=1 00:21:53.731 --rc geninfo_all_blocks=1 00:21:53.731 --rc geninfo_unexecuted_blocks=1 00:21:53.731 00:21:53.731 ' 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.731 --rc genhtml_branch_coverage=1 00:21:53.731 --rc genhtml_function_coverage=1 00:21:53.731 --rc genhtml_legend=1 00:21:53.731 --rc geninfo_all_blocks=1 00:21:53.731 --rc geninfo_unexecuted_blocks=1 00:21:53.731 00:21:53.731 ' 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.731 --rc genhtml_branch_coverage=1 00:21:53.731 --rc 
genhtml_function_coverage=1 00:21:53.731 --rc genhtml_legend=1 00:21:53.731 --rc geninfo_all_blocks=1 00:21:53.731 --rc geninfo_unexecuted_blocks=1 00:21:53.731 00:21:53.731 ' 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.731 --rc genhtml_branch_coverage=1 00:21:53.731 --rc genhtml_function_coverage=1 00:21:53.731 --rc genhtml_legend=1 00:21:53.731 --rc geninfo_all_blocks=1 00:21:53.731 --rc geninfo_unexecuted_blocks=1 00:21:53.731 00:21:53.731 ' 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:53.731 20:40:01 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:53.731 20:40:01 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:53.731 20:40:01 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:53.731 20:40:01 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:53.731 20:40:01 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:53.731 20:40:01 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:53.731 20:40:01 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:53.731 20:40:01 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:53.731 20:40:01 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.731 20:40:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:53.731 ************************************ 00:21:53.731 START TEST test_save_ublk_config 00:21:53.731 ************************************ 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75525 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75525 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75525 ']' 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:53.731 20:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.731 20:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.732 20:40:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:53.732 [2024-11-25 20:40:01.687519] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:21:53.732 [2024-11-25 20:40:01.687652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75525 ] 00:21:53.995 [2024-11-25 20:40:01.864508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.995 [2024-11-25 20:40:02.000910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.932 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.932 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:54.932 20:40:03 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:54.932 20:40:03 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:54.932 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.932 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:54.932 [2024-11-25 20:40:03.057359] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:54.932 [2024-11-25 20:40:03.058705] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:55.191 malloc0 00:21:55.191 [2024-11-25 20:40:03.161507] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:55.191 [2024-11-25 20:40:03.161632] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:55.191 [2024-11-25 20:40:03.161647] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:55.191 [2024-11-25 20:40:03.161656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:55.191 [2024-11-25 20:40:03.170570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:55.191 [2024-11-25 20:40:03.170598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:55.191 [2024-11-25 20:40:03.177363] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:55.191 [2024-11-25 20:40:03.177479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:55.191 [2024-11-25 20:40:03.194356] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:55.191 0 00:21:55.191 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.191 20:40:03 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:55.191 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.191 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:55.450 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.450 20:40:03 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:55.450 
"subsystems": [ 00:21:55.450 { 00:21:55.450 "subsystem": "fsdev", 00:21:55.450 "config": [ 00:21:55.450 { 00:21:55.450 "method": "fsdev_set_opts", 00:21:55.450 "params": { 00:21:55.450 "fsdev_io_pool_size": 65535, 00:21:55.450 "fsdev_io_cache_size": 256 00:21:55.450 } 00:21:55.450 } 00:21:55.450 ] 00:21:55.450 }, 00:21:55.450 { 00:21:55.450 "subsystem": "keyring", 00:21:55.450 "config": [] 00:21:55.450 }, 00:21:55.450 { 00:21:55.450 "subsystem": "iobuf", 00:21:55.450 "config": [ 00:21:55.450 { 00:21:55.450 "method": "iobuf_set_options", 00:21:55.450 "params": { 00:21:55.450 "small_pool_count": 8192, 00:21:55.450 "large_pool_count": 1024, 00:21:55.450 "small_bufsize": 8192, 00:21:55.450 "large_bufsize": 135168, 00:21:55.450 "enable_numa": false 00:21:55.450 } 00:21:55.450 } 00:21:55.450 ] 00:21:55.450 }, 00:21:55.450 { 00:21:55.450 "subsystem": "sock", 00:21:55.450 "config": [ 00:21:55.450 { 00:21:55.450 "method": "sock_set_default_impl", 00:21:55.450 "params": { 00:21:55.450 "impl_name": "posix" 00:21:55.450 } 00:21:55.450 }, 00:21:55.450 { 00:21:55.450 "method": "sock_impl_set_options", 00:21:55.450 "params": { 00:21:55.450 "impl_name": "ssl", 00:21:55.450 "recv_buf_size": 4096, 00:21:55.450 "send_buf_size": 4096, 00:21:55.450 "enable_recv_pipe": true, 00:21:55.451 "enable_quickack": false, 00:21:55.451 "enable_placement_id": 0, 00:21:55.451 "enable_zerocopy_send_server": true, 00:21:55.451 "enable_zerocopy_send_client": false, 00:21:55.451 "zerocopy_threshold": 0, 00:21:55.451 "tls_version": 0, 00:21:55.451 "enable_ktls": false 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "sock_impl_set_options", 00:21:55.451 "params": { 00:21:55.451 "impl_name": "posix", 00:21:55.451 "recv_buf_size": 2097152, 00:21:55.451 "send_buf_size": 2097152, 00:21:55.451 "enable_recv_pipe": true, 00:21:55.451 "enable_quickack": false, 00:21:55.451 "enable_placement_id": 0, 00:21:55.451 "enable_zerocopy_send_server": true, 00:21:55.451 "enable_zerocopy_send_client": false, 00:21:55.451 "zerocopy_threshold": 0, 00:21:55.451 "tls_version": 0, 00:21:55.451 "enable_ktls": false 00:21:55.451 } 00:21:55.451 } 00:21:55.451 ] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "vmd", 00:21:55.451 "config": [] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "accel", 00:21:55.451 "config": [ 00:21:55.451 { 00:21:55.451 "method": "accel_set_options", 00:21:55.451 "params": { 00:21:55.451 "small_cache_size": 128, 00:21:55.451 "large_cache_size": 16, 00:21:55.451 "task_count": 2048, 00:21:55.451 "sequence_count": 2048, 00:21:55.451 "buf_count": 2048 00:21:55.451 } 00:21:55.451 } 00:21:55.451 ] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "bdev", 00:21:55.451 "config": [ 00:21:55.451 { 00:21:55.451 "method": "bdev_set_options", 00:21:55.451 "params": { 00:21:55.451 "bdev_io_pool_size": 65535, 00:21:55.451 "bdev_io_cache_size": 256, 00:21:55.451 "bdev_auto_examine": true, 00:21:55.451 "iobuf_small_cache_size": 128, 00:21:55.451 "iobuf_large_cache_size": 16 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "bdev_raid_set_options", 00:21:55.451 "params": { 00:21:55.451 "process_window_size_kb": 1024, 00:21:55.451 "process_max_bandwidth_mb_sec": 0 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "bdev_iscsi_set_options", 00:21:55.451 "params": { 00:21:55.451 "timeout_sec": 30 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "bdev_nvme_set_options", 00:21:55.451 "params": { 00:21:55.451 "action_on_timeout": "none", 
00:21:55.451 "timeout_us": 0, 00:21:55.451 "timeout_admin_us": 0, 00:21:55.451 "keep_alive_timeout_ms": 10000, 00:21:55.451 "arbitration_burst": 0, 00:21:55.451 "low_priority_weight": 0, 00:21:55.451 "medium_priority_weight": 0, 00:21:55.451 "high_priority_weight": 0, 00:21:55.451 "nvme_adminq_poll_period_us": 10000, 00:21:55.451 "nvme_ioq_poll_period_us": 0, 00:21:55.451 "io_queue_requests": 0, 00:21:55.451 "delay_cmd_submit": true, 00:21:55.451 "transport_retry_count": 4, 00:21:55.451 "bdev_retry_count": 3, 00:21:55.451 "transport_ack_timeout": 0, 00:21:55.451 "ctrlr_loss_timeout_sec": 0, 00:21:55.451 "reconnect_delay_sec": 0, 00:21:55.451 "fast_io_fail_timeout_sec": 0, 00:21:55.451 "disable_auto_failback": false, 00:21:55.451 "generate_uuids": false, 00:21:55.451 "transport_tos": 0, 00:21:55.451 "nvme_error_stat": false, 00:21:55.451 "rdma_srq_size": 0, 00:21:55.451 "io_path_stat": false, 00:21:55.451 "allow_accel_sequence": false, 00:21:55.451 "rdma_max_cq_size": 0, 00:21:55.451 "rdma_cm_event_timeout_ms": 0, 00:21:55.451 "dhchap_digests": [ 00:21:55.451 "sha256", 00:21:55.451 "sha384", 00:21:55.451 "sha512" 00:21:55.451 ], 00:21:55.451 "dhchap_dhgroups": [ 00:21:55.451 "null", 00:21:55.451 "ffdhe2048", 00:21:55.451 "ffdhe3072", 00:21:55.451 "ffdhe4096", 00:21:55.451 "ffdhe6144", 00:21:55.451 "ffdhe8192" 00:21:55.451 ] 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "bdev_nvme_set_hotplug", 00:21:55.451 "params": { 00:21:55.451 "period_us": 100000, 00:21:55.451 "enable": false 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "bdev_malloc_create", 00:21:55.451 "params": { 00:21:55.451 "name": "malloc0", 00:21:55.451 "num_blocks": 8192, 00:21:55.451 "block_size": 4096, 00:21:55.451 "physical_block_size": 4096, 00:21:55.451 "uuid": "dcd18c58-df4a-4cec-bcf4-3563fa94073d", 00:21:55.451 "optimal_io_boundary": 0, 00:21:55.451 "md_size": 0, 00:21:55.451 "dif_type": 0, 00:21:55.451 "dif_is_head_of_md": false, 00:21:55.451 "dif_pi_format": 0 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "bdev_wait_for_examine" 00:21:55.451 } 00:21:55.451 ] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "scsi", 00:21:55.451 "config": null 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "scheduler", 00:21:55.451 "config": [ 00:21:55.451 { 00:21:55.451 "method": "framework_set_scheduler", 00:21:55.451 "params": { 00:21:55.451 "name": "static" 00:21:55.451 } 00:21:55.451 } 00:21:55.451 ] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "vhost_scsi", 00:21:55.451 "config": [] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "vhost_blk", 00:21:55.451 "config": [] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "ublk", 00:21:55.451 "config": [ 00:21:55.451 { 00:21:55.451 "method": "ublk_create_target", 00:21:55.451 "params": { 00:21:55.451 "cpumask": "1" 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "ublk_start_disk", 00:21:55.451 "params": { 00:21:55.451 "bdev_name": "malloc0", 00:21:55.451 "ublk_id": 0, 00:21:55.451 "num_queues": 1, 00:21:55.451 "queue_depth": 128 00:21:55.451 } 00:21:55.451 } 00:21:55.451 ] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "nbd", 00:21:55.451 "config": [] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "nvmf", 00:21:55.451 "config": [ 00:21:55.451 { 00:21:55.451 "method": "nvmf_set_config", 00:21:55.451 "params": { 00:21:55.451 "discovery_filter": "match_any", 00:21:55.451 "admin_cmd_passthru": { 00:21:55.451 "identify_ctrlr": false 
00:21:55.451 }, 00:21:55.451 "dhchap_digests": [ 00:21:55.451 "sha256", 00:21:55.451 "sha384", 00:21:55.451 "sha512" 00:21:55.451 ], 00:21:55.451 "dhchap_dhgroups": [ 00:21:55.451 "null", 00:21:55.451 "ffdhe2048", 00:21:55.451 "ffdhe3072", 00:21:55.451 "ffdhe4096", 00:21:55.451 "ffdhe6144", 00:21:55.451 "ffdhe8192" 00:21:55.451 ] 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "nvmf_set_max_subsystems", 00:21:55.451 "params": { 00:21:55.451 "max_subsystems": 1024 00:21:55.451 } 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "method": "nvmf_set_crdt", 00:21:55.451 "params": { 00:21:55.451 "crdt1": 0, 00:21:55.451 "crdt2": 0, 00:21:55.451 "crdt3": 0 00:21:55.451 } 00:21:55.451 } 00:21:55.451 ] 00:21:55.451 }, 00:21:55.451 { 00:21:55.451 "subsystem": "iscsi", 00:21:55.451 "config": [ 00:21:55.451 { 00:21:55.451 "method": "iscsi_set_options", 00:21:55.451 "params": { 00:21:55.451 "node_base": "iqn.2016-06.io.spdk", 00:21:55.451 "max_sessions": 128, 00:21:55.451 "max_connections_per_session": 2, 00:21:55.451 "max_queue_depth": 64, 00:21:55.452 "default_time2wait": 2, 00:21:55.452 "default_time2retain": 20, 00:21:55.452 "first_burst_length": 8192, 00:21:55.452 "immediate_data": true, 00:21:55.452 "allow_duplicated_isid": false, 00:21:55.452 "error_recovery_level": 0, 00:21:55.452 "nop_timeout": 60, 00:21:55.452 "nop_in_interval": 30, 00:21:55.452 "disable_chap": false, 00:21:55.452 "require_chap": false, 00:21:55.452 "mutual_chap": false, 00:21:55.452 "chap_group": 0, 00:21:55.452 "max_large_datain_per_connection": 64, 00:21:55.452 "max_r2t_per_connection": 4, 00:21:55.452 "pdu_pool_size": 36864, 00:21:55.452 "immediate_data_pool_size": 16384, 00:21:55.452 "data_out_pool_size": 2048 00:21:55.452 } 00:21:55.452 } 00:21:55.452 ] 00:21:55.452 } 00:21:55.452 ] 00:21:55.452 }' 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75525 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75525 ']' 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75525 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75525 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.452 killing process with pid 75525 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75525' 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75525 00:21:55.452 20:40:03 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75525 00:21:57.357 [2024-11-25 20:40:05.123518] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:57.357 [2024-11-25 20:40:05.157372] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:57.357 [2024-11-25 20:40:05.157521] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:57.357 [2024-11-25 20:40:05.166355] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:57.357 [2024-11-25 
20:40:05.166425] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:57.357 [2024-11-25 20:40:05.166444] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:57.357 [2024-11-25 20:40:05.166472] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:57.357 [2024-11-25 20:40:05.166642] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75603 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75603 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75603 ']' 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:59.265 20:40:07 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:59.265 "subsystems": [ 00:21:59.265 { 00:21:59.265 "subsystem": "fsdev", 00:21:59.265 "config": [ 00:21:59.265 { 00:21:59.265 "method": "fsdev_set_opts", 00:21:59.265 "params": { 00:21:59.265 "fsdev_io_pool_size": 65535, 00:21:59.265 "fsdev_io_cache_size": 256 00:21:59.265 } 00:21:59.265 } 00:21:59.265 ] 00:21:59.265 }, 00:21:59.265 { 00:21:59.265 "subsystem": "keyring", 00:21:59.265 "config": [] 00:21:59.265 }, 00:21:59.265 { 00:21:59.265 "subsystem": "iobuf", 00:21:59.265 "config": [ 00:21:59.265 { 00:21:59.265 "method": "iobuf_set_options", 00:21:59.265 "params": { 00:21:59.265 "small_pool_count": 8192, 00:21:59.265 "large_pool_count": 1024, 00:21:59.265 "small_bufsize": 8192, 00:21:59.265 "large_bufsize": 135168, 00:21:59.265 "enable_numa": false 00:21:59.265 } 00:21:59.265 } 00:21:59.265 ] 00:21:59.265 }, 00:21:59.265 { 00:21:59.265 "subsystem": "sock", 00:21:59.265 "config": [ 00:21:59.265 { 00:21:59.265 "method": "sock_set_default_impl", 00:21:59.265 "params": { 00:21:59.265 "impl_name": "posix" 00:21:59.265 } 00:21:59.265 }, 00:21:59.265 { 00:21:59.265 "method": "sock_impl_set_options", 00:21:59.265 "params": { 00:21:59.265 "impl_name": "ssl", 00:21:59.265 "recv_buf_size": 4096, 00:21:59.265 "send_buf_size": 4096, 00:21:59.265 "enable_recv_pipe": true, 00:21:59.265 "enable_quickack": false, 00:21:59.266 "enable_placement_id": 0, 00:21:59.266 "enable_zerocopy_send_server": true, 00:21:59.266 "enable_zerocopy_send_client": false, 00:21:59.266 "zerocopy_threshold": 0, 00:21:59.266 "tls_version": 0, 00:21:59.266 "enable_ktls": false 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "sock_impl_set_options", 00:21:59.266 "params": { 00:21:59.266 "impl_name": "posix", 00:21:59.266 "recv_buf_size": 2097152, 00:21:59.266 "send_buf_size": 2097152, 00:21:59.266 "enable_recv_pipe": true, 00:21:59.266 "enable_quickack": false, 00:21:59.266 "enable_placement_id": 0, 00:21:59.266 "enable_zerocopy_send_server": true, 
00:21:59.266 "enable_zerocopy_send_client": false, 00:21:59.266 "zerocopy_threshold": 0, 00:21:59.266 "tls_version": 0, 00:21:59.266 "enable_ktls": false 00:21:59.266 } 00:21:59.266 } 00:21:59.266 ] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "vmd", 00:21:59.266 "config": [] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "accel", 00:21:59.266 "config": [ 00:21:59.266 { 00:21:59.266 "method": "accel_set_options", 00:21:59.266 "params": { 00:21:59.266 "small_cache_size": 128, 00:21:59.266 "large_cache_size": 16, 00:21:59.266 "task_count": 2048, 00:21:59.266 "sequence_count": 2048, 00:21:59.266 "buf_count": 2048 00:21:59.266 } 00:21:59.266 } 00:21:59.266 ] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "bdev", 00:21:59.266 "config": [ 00:21:59.266 { 00:21:59.266 "method": "bdev_set_options", 00:21:59.266 "params": { 00:21:59.266 "bdev_io_pool_size": 65535, 00:21:59.266 "bdev_io_cache_size": 256, 00:21:59.266 "bdev_auto_examine": true, 00:21:59.266 "iobuf_small_cache_size": 128, 00:21:59.266 "iobuf_large_cache_size": 16 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "bdev_raid_set_options", 00:21:59.266 "params": { 00:21:59.266 "process_window_size_kb": 1024, 00:21:59.266 "process_max_bandwidth_mb_sec": 0 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "bdev_iscsi_set_options", 00:21:59.266 "params": { 00:21:59.266 "timeout_sec": 30 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "bdev_nvme_set_options", 00:21:59.266 "params": { 00:21:59.266 "action_on_timeout": "none", 00:21:59.266 "timeout_us": 0, 00:21:59.266 "timeout_admin_us": 0, 00:21:59.266 "keep_alive_timeout_ms": 10000, 00:21:59.266 "arbitration_burst": 0, 00:21:59.266 "low_priority_weight": 0, 00:21:59.266 "medium_priority_weight": 0, 00:21:59.266 "high_priority_weight": 0, 00:21:59.266 "nvme_adminq_poll_period_us": 10000, 00:21:59.266 "nvme_ioq_poll_period_us": 0, 00:21:59.266 "io_queue_requests": 0, 00:21:59.266 "delay_cmd_submit": true, 00:21:59.266 "transport_retry_count": 4, 00:21:59.266 "bdev_retry_count": 3, 00:21:59.266 "transport_ack_timeout": 0, 00:21:59.266 "ctrlr_loss_timeout_sec": 0, 00:21:59.266 "reconnect_delay_sec": 0, 00:21:59.266 "fast_io_fail_timeout_sec": 0, 00:21:59.266 "disable_auto_failback": false, 00:21:59.266 "generate_uuids": false, 00:21:59.266 "transport_tos": 0, 00:21:59.266 "nvme_error_stat": false, 00:21:59.266 "rdma_srq_size": 0, 00:21:59.266 "io_path_stat": false, 00:21:59.266 "allow_accel_sequence": false, 00:21:59.266 "rdma_max_cq_size": 0, 00:21:59.266 "rdma_cm_event_timeout_ms": 0, 00:21:59.266 "dhchap_digests": [ 00:21:59.266 "sha256", 00:21:59.266 "sha384", 00:21:59.266 "sha512" 00:21:59.266 ], 00:21:59.266 "dhchap_dhgroups": [ 00:21:59.266 "null", 00:21:59.266 "ffdhe2048", 00:21:59.266 "ffdhe3072", 00:21:59.266 "ffdhe4096", 00:21:59.266 "ffdhe6144", 00:21:59.266 "ffdhe8192" 00:21:59.266 ] 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "bdev_nvme_set_hotplug", 00:21:59.266 "params": { 00:21:59.266 "period_us": 100000, 00:21:59.266 "enable": false 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "bdev_malloc_create", 00:21:59.266 "params": { 00:21:59.266 "name": "malloc0", 00:21:59.266 "num_blocks": 8192, 00:21:59.266 "block_size": 4096, 00:21:59.266 "physical_block_size": 4096, 00:21:59.266 "uuid": "dcd18c58-df4a-4cec-bcf4-3563fa94073d", 00:21:59.266 "optimal_io_boundary": 0, 00:21:59.266 "md_size": 0, 00:21:59.266 "dif_type": 0, 00:21:59.266 
"dif_is_head_of_md": false, 00:21:59.266 "dif_pi_format": 0 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "bdev_wait_for_examine" 00:21:59.266 } 00:21:59.266 ] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "scsi", 00:21:59.266 "config": null 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "scheduler", 00:21:59.266 "config": [ 00:21:59.266 { 00:21:59.266 "method": "framework_set_scheduler", 00:21:59.266 "params": { 00:21:59.266 "name": "static" 00:21:59.266 } 00:21:59.266 } 00:21:59.266 ] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "vhost_scsi", 00:21:59.266 "config": [] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "vhost_blk", 00:21:59.266 "config": [] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "ublk", 00:21:59.266 "config": [ 00:21:59.266 { 00:21:59.266 "method": "ublk_create_target", 00:21:59.266 "params": { 00:21:59.266 "cpumask": "1" 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "ublk_start_disk", 00:21:59.266 "params": { 00:21:59.266 "bdev_name": "malloc0", 00:21:59.266 "ublk_id": 0, 00:21:59.266 "num_queues": 1, 00:21:59.266 "queue_depth": 128 00:21:59.266 } 00:21:59.266 } 00:21:59.266 ] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "nbd", 00:21:59.266 "config": [] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "nvmf", 00:21:59.266 "config": [ 00:21:59.266 { 00:21:59.266 "method": "nvmf_set_config", 00:21:59.266 "params": { 00:21:59.266 "discovery_filter": "match_any", 00:21:59.266 "admin_cmd_passthru": { 00:21:59.266 "identify_ctrlr": false 00:21:59.266 }, 00:21:59.266 "dhchap_digests": [ 00:21:59.266 "sha256", 00:21:59.266 "sha384", 00:21:59.266 "sha512" 00:21:59.266 ], 00:21:59.266 "dhchap_dhgroups": [ 00:21:59.266 "null", 00:21:59.266 "ffdhe2048", 00:21:59.266 "ffdhe3072", 00:21:59.266 "ffdhe4096", 00:21:59.266 "ffdhe6144", 00:21:59.266 "ffdhe8192" 00:21:59.266 ] 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "nvmf_set_max_subsystems", 00:21:59.266 "params": { 00:21:59.266 "max_subsystems": 1024 00:21:59.266 } 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "method": "nvmf_set_crdt", 00:21:59.266 "params": { 00:21:59.266 "crdt1": 0, 00:21:59.266 "crdt2": 0, 00:21:59.266 "crdt3": 0 00:21:59.266 } 00:21:59.266 } 00:21:59.266 ] 00:21:59.266 }, 00:21:59.266 { 00:21:59.266 "subsystem": "iscsi", 00:21:59.266 "config": [ 00:21:59.266 { 00:21:59.266 "method": "iscsi_set_options", 00:21:59.266 "params": { 00:21:59.266 "node_base": "iqn.2016-06.io.spdk", 00:21:59.266 "max_sessions": 128, 00:21:59.266 "max_connections_per_session": 2, 00:21:59.266 "max_queue_depth": 64, 00:21:59.266 "default_time2wait": 2, 00:21:59.266 "default_time2retain": 20, 00:21:59.266 "first_burst_length": 8192, 00:21:59.266 "immediate_data": true, 00:21:59.266 "allow_duplicated_isid": false, 00:21:59.266 "error_recovery_level": 0, 00:21:59.266 "nop_timeout": 60, 00:21:59.266 "nop_in_interval": 30, 00:21:59.266 "disable_chap": false, 00:21:59.266 "require_chap": false, 00:21:59.266 "mutual_chap": false, 00:21:59.266 "chap_group": 0, 00:21:59.266 "max_large_datain_per_connection": 64, 00:21:59.266 "max_r2t_per_connection": 4, 00:21:59.266 "pdu_pool_size": 36864, 00:21:59.266 "immediate_data_pool_size": 16384, 00:21:59.266 "data_out_pool_size": 2048 00:21:59.266 } 00:21:59.266 } 00:21:59.266 ] 00:21:59.266 } 00:21:59.266 ] 00:21:59.266 }' 00:21:59.266 [2024-11-25 20:40:07.276309] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:21:59.266 [2024-11-25 20:40:07.276479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75603 ] 00:21:59.526 [2024-11-25 20:40:07.453134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.526 [2024-11-25 20:40:07.583065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.905 [2024-11-25 20:40:08.737365] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:00.905 [2024-11-25 20:40:08.738635] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:00.905 [2024-11-25 20:40:08.745513] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:00.905 [2024-11-25 20:40:08.745616] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:00.905 [2024-11-25 20:40:08.745631] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:00.905 [2024-11-25 20:40:08.745640] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:00.905 [2024-11-25 20:40:08.754450] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:00.905 [2024-11-25 20:40:08.754475] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:00.905 [2024-11-25 20:40:08.761357] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:00.905 [2024-11-25 20:40:08.761463] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:00.906 [2024-11-25 20:40:08.778352] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75603 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75603 ']' 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75603 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75603 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.906 killing process with pid 75603 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75603' 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75603 00:22:00.906 20:40:08 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75603 00:22:02.812 [2024-11-25 20:40:10.592428] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:02.812 [2024-11-25 20:40:10.625377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:02.812 [2024-11-25 20:40:10.625525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:02.812 [2024-11-25 20:40:10.635344] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:02.812 [2024-11-25 20:40:10.635419] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:02.812 [2024-11-25 20:40:10.635430] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:02.812 [2024-11-25 20:40:10.635460] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:02.812 [2024-11-25 20:40:10.635625] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:04.720 20:40:12 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:22:04.720 00:22:04.720 real 0m11.046s 00:22:04.720 user 0m8.356s 00:22:04.720 sys 0m3.469s 00:22:04.720 20:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.720 20:40:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:04.720 ************************************ 00:22:04.720 END TEST test_save_ublk_config 00:22:04.720 ************************************ 00:22:04.720 20:40:12 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75694 00:22:04.720 20:40:12 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:04.720 20:40:12 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:04.720 20:40:12 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75694 00:22:04.720 20:40:12 ublk -- common/autotest_common.sh@835 -- # '[' -z 75694 ']' 00:22:04.720 20:40:12 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.720 20:40:12 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.720 20:40:12 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.720 20:40:12 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.720 20:40:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:04.720 [2024-11-25 20:40:12.789952] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
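Here the harness launches a fresh spdk_tgt (-m 0x3, two cores) and blocks in waitforlisten until the RPC socket answers. A hedged sketch of that launch-and-wait pattern; the spdk_get_version probe is an assumption, any cheap RPC that succeeds once the socket is up would serve:

```bash
# Start the target on cores 0-1 with ublk debug logging, then poll the
# default UNIX socket (/var/tmp/spdk.sock) until RPCs go through.
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
spdk_pid=$!
for _ in $(seq 1 100); do
    ./scripts/rpc.py spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done
echo "target $spdk_pid is listening"
```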
00:22:04.720 [2024-11-25 20:40:12.790657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75694 ] 00:22:04.979 [2024-11-25 20:40:12.967636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:04.979 [2024-11-25 20:40:13.107083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.979 [2024-11-25 20:40:13.107119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.359 20:40:14 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.359 20:40:14 ublk -- common/autotest_common.sh@868 -- # return 0 00:22:06.359 20:40:14 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:22:06.359 20:40:14 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:06.359 20:40:14 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.359 20:40:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:06.359 ************************************ 00:22:06.359 START TEST test_create_ublk 00:22:06.359 ************************************ 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:22:06.359 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:06.359 [2024-11-25 20:40:14.125352] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:06.359 [2024-11-25 20:40:14.128662] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.359 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:22:06.359 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.359 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:22:06.359 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.359 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:06.359 [2024-11-25 20:40:14.471527] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:22:06.359 [2024-11-25 20:40:14.472027] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:06.359 [2024-11-25 20:40:14.472051] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:06.359 [2024-11-25 20:40:14.472061] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:06.359 [2024-11-25 20:40:14.480809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:06.359 [2024-11-25 20:40:14.480837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:06.359 
[2024-11-25 20:40:14.487360] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:06.618 [2024-11-25 20:40:14.500404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:06.618 [2024-11-25 20:40:14.513464] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:06.618 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:22:06.618 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.618 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:06.618 20:40:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:22:06.618 { 00:22:06.618 "ublk_device": "/dev/ublkb0", 00:22:06.618 "id": 0, 00:22:06.618 "queue_depth": 512, 00:22:06.618 "num_queues": 4, 00:22:06.618 "bdev_name": "Malloc0" 00:22:06.618 } 00:22:06.618 ]' 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:06.618 20:40:14 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:22:06.618 20:40:14 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:22:06.619 20:40:14 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
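The RPC sequence just above brings a ublk device up from nothing, and the fio_template line assembles the write-and-verify job that runs next. Condensed, the same flow with the exact parameters this log shows (4 queues, depth 512, 128 MiB malloc bdev, 0xcc pattern):

```bash
./scripts/rpc.py ublk_create_target
./scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096   # 128 MiB, 4 KiB blocks
./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # exports /dev/ublkb0

# Confirm the kernel node exists before pointing I/O at it.
[[ $(./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device') == /dev/ublkb0 ]]

# Time-based pattern write; fio verifies the 0xcc pattern inline.
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
```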
00:22:06.619 20:40:14 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:22:06.878 fio: verification read phase will never start because write phase uses all of runtime 00:22:06.878 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:22:06.878 fio-3.35 00:22:06.878 Starting 1 process 00:22:16.862 00:22:16.862 fio_test: (groupid=0, jobs=1): err= 0: pid=75747: Mon Nov 25 20:40:24 2024 00:22:16.862 write: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(555MiB/10001msec); 0 zone resets 00:22:16.862 clat (usec): min=48, max=8943, avg=69.60, stdev=131.55 00:22:16.862 lat (usec): min=48, max=8972, avg=70.08, stdev=131.59 00:22:16.862 clat percentiles (usec): 00:22:16.862 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 59], 20.00th=[ 60], 00:22:16.862 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 62], 60.00th=[ 63], 00:22:16.862 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 76], 00:22:16.862 | 99.00th=[ 91], 99.50th=[ 104], 99.90th=[ 2900], 99.95th=[ 3425], 00:22:16.862 | 99.99th=[ 3785] 00:22:16.862 bw ( KiB/s): min=19576, max=60032, per=99.89%, avg=56728.42, stdev=9297.16, samples=19 00:22:16.862 iops : min= 4894, max=15008, avg=14182.00, stdev=2324.36, samples=19 00:22:16.862 lat (usec) : 50=0.01%, 100=99.41%, 250=0.32%, 500=0.01%, 750=0.01% 00:22:16.862 lat (usec) : 1000=0.01% 00:22:16.862 lat (msec) : 2=0.07%, 4=0.16%, 10=0.01% 00:22:16.862 cpu : usr=2.61%, sys=9.17%, ctx=141994, majf=0, minf=797 00:22:16.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:16.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.862 issued rwts: total=0,141990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:16.862 00:22:16.862 Run status group 0 (all jobs): 00:22:16.862 WRITE: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=555MiB (582MB), run=10001-10001msec 00:22:16.862 00:22:16.862 Disk stats (read/write): 00:22:16.862 ublkb0: ios=0/140448, merge=0/0, ticks=0/8780, in_queue=8781, util=99.15% 00:22:17.122 20:40:25 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:17.122 [2024-11-25 20:40:25.008079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:17.122 [2024-11-25 20:40:25.042398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:17.122 [2024-11-25 20:40:25.043142] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:17.122 [2024-11-25 20:40:25.053403] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:17.122 [2024-11-25 20:40:25.053840] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:17.122 [2024-11-25 20:40:25.057347] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.122 20:40:25 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:17.122 [2024-11-25 20:40:25.075450] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:22:17.122 request: 00:22:17.122 { 00:22:17.122 "ublk_id": 0, 00:22:17.122 "method": "ublk_stop_disk", 00:22:17.122 "req_id": 1 00:22:17.122 } 00:22:17.122 Got JSON-RPC error response 00:22:17.122 response: 00:22:17.122 { 00:22:17.122 "code": -19, 00:22:17.122 "message": "No such device" 00:22:17.122 } 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.122 20:40:25 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:17.122 [2024-11-25 20:40:25.098477] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:17.122 [2024-11-25 20:40:25.106347] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:17.122 [2024-11-25 20:40:25.106398] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.122 20:40:25 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.122 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.061 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.061 20:40:25 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:22:18.061 20:40:25 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:18.061 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.061 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.061 20:40:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.061 20:40:25 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:18.061 20:40:25 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:22:18.061 20:40:26 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:18.061 20:40:26 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:18.061 20:40:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.061 20:40:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.061 20:40:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.061 20:40:26 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:18.061 20:40:26 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:22:18.061 ************************************ 00:22:18.061 END TEST test_create_ublk 00:22:18.061 ************************************ 00:22:18.061 20:40:26 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:18.061 00:22:18.061 real 0m11.948s 00:22:18.061 user 0m0.640s 00:22:18.061 sys 0m1.053s 00:22:18.061 20:40:26 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.061 20:40:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.061 20:40:26 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:22:18.061 20:40:26 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:18.061 20:40:26 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.061 20:40:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.061 ************************************ 00:22:18.061 START TEST test_create_multi_ublk 00:22:18.061 ************************************ 00:22:18.061 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:22:18.061 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:22:18.061 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.061 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.061 [2024-11-25 20:40:26.154347] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:18.061 [2024-11-25 20:40:26.157417] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:18.061 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.062 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:22:18.062 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:22:18.062 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:18.062 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:22:18.062 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.062 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.630 [2024-11-25 20:40:26.492530] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:22:18.630 [2024-11-25 20:40:26.493056] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:18.630 [2024-11-25 20:40:26.493075] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:18.630 [2024-11-25 20:40:26.493091] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:18.630 [2024-11-25 20:40:26.501778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:18.630 [2024-11-25 20:40:26.501811] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:18.630 [2024-11-25 20:40:26.508351] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:18.630 [2024-11-25 20:40:26.508977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:18.630 [2024-11-25 20:40:26.531363] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.630 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:18.890 [2024-11-25 20:40:26.903515] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:22:18.890 [2024-11-25 20:40:26.904016] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:22:18.890 [2024-11-25 20:40:26.904039] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:18.890 [2024-11-25 20:40:26.904049] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:18.890 [2024-11-25 20:40:26.911391] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:18.890 [2024-11-25 20:40:26.911415] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:18.890 [2024-11-25 20:40:26.919372] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:18.890 [2024-11-25 20:40:26.920004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:18.890 [2024-11-25 20:40:26.936353] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:18.890 
20:40:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.890 20:40:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:19.149 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.409 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:22:19.409 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:22:19.409 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.409 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:19.409 [2024-11-25 20:40:27.292490] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:22:19.409 [2024-11-25 20:40:27.293015] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:22:19.409 [2024-11-25 20:40:27.293040] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:22:19.409 [2024-11-25 20:40:27.293053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:22:19.409 [2024-11-25 20:40:27.301808] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:19.409 [2024-11-25 20:40:27.301837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:19.409 [2024-11-25 20:40:27.308360] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:19.409 [2024-11-25 20:40:27.309011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:22:19.409 [2024-11-25 20:40:27.313904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:22:19.410 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.410 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:22:19.410 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:19.410 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:22:19.410 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.410 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:19.668 [2024-11-25 20:40:27.667536] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:22:19.668 [2024-11-25 20:40:27.668039] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:22:19.668 [2024-11-25 20:40:27.668061] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:22:19.668 [2024-11-25 20:40:27.668071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:22:19.668 
[2024-11-25 20:40:27.675377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:19.668 [2024-11-25 20:40:27.675405] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:19.668 [2024-11-25 20:40:27.683369] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:19.668 [2024-11-25 20:40:27.684019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:22:19.668 [2024-11-25 20:40:27.698415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:22:19.668 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.669 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:19.669 20:40:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.669 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:22:19.669 { 00:22:19.669 "ublk_device": "/dev/ublkb0", 00:22:19.669 "id": 0, 00:22:19.669 "queue_depth": 512, 00:22:19.669 "num_queues": 4, 00:22:19.669 "bdev_name": "Malloc0" 00:22:19.669 }, 00:22:19.669 { 00:22:19.669 "ublk_device": "/dev/ublkb1", 00:22:19.669 "id": 1, 00:22:19.669 "queue_depth": 512, 00:22:19.669 "num_queues": 4, 00:22:19.669 "bdev_name": "Malloc1" 00:22:19.669 }, 00:22:19.669 { 00:22:19.669 "ublk_device": "/dev/ublkb2", 00:22:19.669 "id": 2, 00:22:19.669 "queue_depth": 512, 00:22:19.669 "num_queues": 4, 00:22:19.669 "bdev_name": "Malloc2" 00:22:19.669 }, 00:22:19.669 { 00:22:19.669 "ublk_device": "/dev/ublkb3", 00:22:19.669 "id": 3, 00:22:19.669 "queue_depth": 512, 00:22:19.669 "num_queues": 4, 00:22:19.669 "bdev_name": "Malloc3" 00:22:19.669 } 00:22:19.669 ]' 00:22:19.669 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:22:19.669 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:19.669 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:22:19.669 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:19.669 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:22:19.928 20:40:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:22:19.928 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:22:19.928 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:20.187 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:22:20.446 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.447 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:20.447 [2024-11-25 20:40:28.576516] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:20.706 [2024-11-25 20:40:28.617897] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:20.706 [2024-11-25 20:40:28.618925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:20.706 [2024-11-25 20:40:28.621001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:20.706 [2024-11-25 20:40:28.621309] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:20.706 [2024-11-25 20:40:28.621334] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:20.706 [2024-11-25 20:40:28.638478] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:20.706 [2024-11-25 20:40:28.673406] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:20.706 [2024-11-25 20:40:28.674287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:20.706 [2024-11-25 20:40:28.686359] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:20.706 [2024-11-25 20:40:28.686672] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:20.706 [2024-11-25 20:40:28.686692] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:20.706 [2024-11-25 20:40:28.697500] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:22:20.706 [2024-11-25 20:40:28.736390] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:20.706 [2024-11-25 20:40:28.737187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:22:20.706 [2024-11-25 20:40:28.744368] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:20.706 [2024-11-25 20:40:28.744661] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:22:20.706 [2024-11-25 20:40:28.744678] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:22:20.706 [2024-11-25 20:40:28.758465] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:22:20.706 [2024-11-25 20:40:28.805401] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:20.706 [2024-11-25 20:40:28.806180] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:22:20.706 [2024-11-25 20:40:28.812354] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:20.706 [2024-11-25 20:40:28.812713] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:22:20.706 [2024-11-25 20:40:28.812735] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.706 20:40:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:22:20.965 [2024-11-25 20:40:29.012484] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:20.965 [2024-11-25 20:40:29.020354] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:20.965 [2024-11-25 20:40:29.020413] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:20.965 20:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:22:20.965 20:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:20.965 20:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:20.965 20:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.965 20:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:21.922 20:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.922 20:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:21.923 20:40:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:21.923 20:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.923 20:40:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:22.491 20:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.491 20:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:22.491 20:40:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:22.491 20:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.491 20:40:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:23.059 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.059 20:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:23.059 20:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:23.059 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.059 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:23.317 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.317 20:40:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:23.317 20:40:31 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:23.317 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.317 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:23.317 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.317 20:40:31 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:23.317 20:40:31 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:23.578 ************************************ 00:22:23.578 END TEST test_create_multi_ublk 00:22:23.578 ************************************ 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:23.578 00:22:23.578 real 0m5.363s 00:22:23.578 user 0m0.984s 00:22:23.578 sys 0m0.252s 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.578 20:40:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:23.578 20:40:31 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:23.578 20:40:31 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:23.578 20:40:31 ublk -- ublk/ublk.sh@130 -- # killprocess 75694 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@954 -- # '[' -z 75694 ']' 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@958 -- # kill -0 75694 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@959 -- # uname 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75694 00:22:23.578 killing process with pid 75694 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75694' 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@973 -- # kill 75694 00:22:23.578 20:40:31 ublk -- common/autotest_common.sh@978 -- # wait 75694 00:22:24.955 [2024-11-25 20:40:32.907612] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:24.955 [2024-11-25 20:40:32.907687] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:26.348 00:22:26.348 real 0m32.996s 00:22:26.348 user 0m46.599s 00:22:26.348 sys 0m11.307s 00:22:26.348 20:40:34 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.348 20:40:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:26.348 ************************************ 00:22:26.348 END TEST ublk 00:22:26.348 ************************************ 00:22:26.348 20:40:34 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:26.348 
20:40:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:26.348 20:40:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.348 20:40:34 -- common/autotest_common.sh@10 -- # set +x 00:22:26.348 ************************************ 00:22:26.348 START TEST ublk_recovery 00:22:26.348 ************************************ 00:22:26.348 20:40:34 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:26.348 * Looking for test storage... 00:22:26.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.606 20:40:34 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.606 --rc genhtml_branch_coverage=1 00:22:26.606 --rc genhtml_function_coverage=1 00:22:26.606 --rc genhtml_legend=1 00:22:26.606 --rc geninfo_all_blocks=1 00:22:26.606 --rc geninfo_unexecuted_blocks=1 00:22:26.606 00:22:26.606 ' 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.606 --rc genhtml_branch_coverage=1 00:22:26.606 --rc genhtml_function_coverage=1 00:22:26.606 --rc genhtml_legend=1 00:22:26.606 --rc geninfo_all_blocks=1 00:22:26.606 --rc geninfo_unexecuted_blocks=1 00:22:26.606 00:22:26.606 ' 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.606 --rc genhtml_branch_coverage=1 00:22:26.606 --rc genhtml_function_coverage=1 00:22:26.606 --rc genhtml_legend=1 00:22:26.606 --rc geninfo_all_blocks=1 00:22:26.606 --rc geninfo_unexecuted_blocks=1 00:22:26.606 00:22:26.606 ' 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.606 --rc genhtml_branch_coverage=1 00:22:26.606 --rc genhtml_function_coverage=1 00:22:26.606 --rc genhtml_legend=1 00:22:26.606 --rc geninfo_all_blocks=1 00:22:26.606 --rc geninfo_unexecuted_blocks=1 00:22:26.606 00:22:26.606 ' 00:22:26.606 20:40:34 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:26.606 20:40:34 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:26.606 20:40:34 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:26.606 20:40:34 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:26.606 20:40:34 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:26.606 20:40:34 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:26.606 20:40:34 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:26.606 20:40:34 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:26.606 20:40:34 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:22:26.606 20:40:34 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:26.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.606 20:40:34 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76133 00:22:26.606 20:40:34 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.606 20:40:34 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:26.606 20:40:34 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76133 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76133 ']' 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.606 20:40:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.606 [2024-11-25 20:40:34.722962] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:22:26.606 [2024-11-25 20:40:34.723320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76133 ] 00:22:26.864 [2024-11-25 20:40:34.919699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:27.122 [2024-11-25 20:40:35.059191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.122 [2024-11-25 20:40:35.059239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.076 20:40:36 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.076 20:40:36 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:28.076 20:40:36 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:28.076 20:40:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 20:40:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.076 [2024-11-25 20:40:36.090351] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:28.076 [2024-11-25 20:40:36.093307] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:28.076 20:40:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.076 20:40:36 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:28.076 20:40:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.076 20:40:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.333 malloc0 00:22:28.333 20:40:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.333 20:40:36 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:28.333 20:40:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.333 20:40:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.333 [2024-11-25 20:40:36.271529] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:22:28.333 [2024-11-25 20:40:36.271660] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:28.333 [2024-11-25 20:40:36.271677] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:28.333 [2024-11-25 20:40:36.271690] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:28.333 [2024-11-25 20:40:36.279383] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:28.333 [2024-11-25 20:40:36.279408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:28.333 [2024-11-25 20:40:36.287363] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:28.333 [2024-11-25 20:40:36.287533] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:28.333 [2024-11-25 20:40:36.318379] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:28.333 1 00:22:28.333 20:40:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.333 20:40:36 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:29.267 20:40:37 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76174 00:22:29.267 20:40:37 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:29.267 20:40:37 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:29.526 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:29.526 fio-3.35 00:22:29.526 Starting 1 process 00:22:34.833 20:40:42 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76133 00:22:34.833 20:40:42 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:40.112 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76133 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:40.112 20:40:47 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76285 00:22:40.112 20:40:47 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:40.112 20:40:47 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:40.112 20:40:47 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76285 00:22:40.112 20:40:47 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76285 ']' 00:22:40.112 20:40:47 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.112 20:40:47 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.112 20:40:47 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.112 20:40:47 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.112 20:40:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.112 [2024-11-25 20:40:47.471233] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:22:40.112 [2024-11-25 20:40:47.471620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76285 ] 00:22:40.112 [2024-11-25 20:40:47.658499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:40.112 [2024-11-25 20:40:47.807353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.112 [2024-11-25 20:40:47.807418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.702 20:40:48 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.702 20:40:48 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:40.702 20:40:48 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:40.702 20:40:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.702 20:40:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.702 [2024-11-25 20:40:48.830360] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:40.702 [2024-11-25 20:40:48.833349] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:40.702 20:40:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.702 20:40:48 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:40.702 20:40:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.702 20:40:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.962 malloc0 00:22:40.962 20:40:49 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.962 20:40:49 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:40.962 20:40:49 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.962 20:40:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.962 [2024-11-25 20:40:49.014553] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:40.962 [2024-11-25 20:40:49.014608] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:40.962 [2024-11-25 20:40:49.014622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:40.962 [2024-11-25 20:40:49.022387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:40.962 [2024-11-25 20:40:49.022419] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:22:40.962 [2024-11-25 20:40:49.022431] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:40.962 [2024-11-25 20:40:49.022540] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:40.962 1 00:22:40.962 20:40:49 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.962 20:40:49 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76174 00:22:40.962 [2024-11-25 20:40:49.030365] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:40.962 [2024-11-25 20:40:49.037113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:40.962 [2024-11-25 20:40:49.044618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:40.962 [2024-11-25 
20:40:49.044645] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:37.221 00:23:37.221 fio_test: (groupid=0, jobs=1): err= 0: pid=76177: Mon Nov 25 20:41:37 2024 00:23:37.221 read: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(4663MiB/60002msec) 00:23:37.221 slat (nsec): min=1798, max=3321.9k, avg=8206.54, stdev=4599.96 00:23:37.221 clat (usec): min=956, max=6715.5k, avg=3138.53, stdev=47582.18 00:23:37.221 lat (usec): min=960, max=6715.5k, avg=3146.74, stdev=47582.19 00:23:37.221 clat percentiles (usec): 00:23:37.221 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2343], 00:23:37.221 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2573], 00:23:37.221 | 70.00th=[ 3064], 80.00th=[ 3359], 90.00th=[ 3556], 95.00th=[ 4015], 00:23:37.221 | 99.00th=[ 5276], 99.50th=[ 5866], 99.90th=[ 7439], 99.95th=[ 8225], 00:23:37.221 | 99.99th=[13304] 00:23:37.221 bw ( KiB/s): min= 1944, max=103776, per=100.00%, avg=88600.44, stdev=16738.08, samples=107 00:23:37.221 iops : min= 486, max=25944, avg=22150.07, stdev=4184.55, samples=107 00:23:37.221 write: IOPS=19.9k, BW=77.7MiB/s (81.4MB/s)(4660MiB/60002msec); 0 zone resets 00:23:37.221 slat (nsec): min=1852, max=3994.9k, avg=8272.41, stdev=4737.70 00:23:37.221 clat (usec): min=766, max=6715.7k, avg=3279.09, stdev=50671.59 00:23:37.221 lat (usec): min=770, max=6715.8k, avg=3287.36, stdev=50671.60 00:23:37.221 clat percentiles (usec): 00:23:37.221 | 1.00th=[ 2040], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:23:37.221 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2671], 00:23:37.221 | 70.00th=[ 3163], 80.00th=[ 3490], 90.00th=[ 3720], 95.00th=[ 4015], 00:23:37.221 | 99.00th=[ 5342], 99.50th=[ 5997], 99.90th=[ 7635], 99.95th=[ 8291], 00:23:37.221 | 99.99th=[13566] 00:23:37.221 bw ( KiB/s): min= 2176, max=103288, per=100.00%, avg=88547.42, stdev=16551.35, samples=107 00:23:37.221 iops : min= 544, max=25822, avg=22136.81, stdev=4137.87, samples=107 00:23:37.221 lat (usec) : 1000=0.01% 00:23:37.221 lat (msec) : 2=0.61%, 4=94.35%, 10=5.01%, 20=0.02%, >=2000=0.01% 00:23:37.221 cpu : usr=11.82%, sys=32.39%, ctx=103467, majf=0, minf=13 00:23:37.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:37.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:37.221 issued rwts: total=1193784,1193019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:37.221 00:23:37.221 Run status group 0 (all jobs): 00:23:37.221 READ: bw=77.7MiB/s (81.5MB/s), 77.7MiB/s-77.7MiB/s (81.5MB/s-81.5MB/s), io=4663MiB (4890MB), run=60002-60002msec 00:23:37.221 WRITE: bw=77.7MiB/s (81.4MB/s), 77.7MiB/s-77.7MiB/s (81.4MB/s-81.4MB/s), io=4660MiB (4887MB), run=60002-60002msec 00:23:37.221 00:23:37.221 Disk stats (read/write): 00:23:37.221 ublkb1: ios=1191645/1190828, merge=0/0, ticks=3636993/3666262, in_queue=7303256, util=99.94% 00:23:37.221 20:41:37 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.221 [2024-11-25 20:41:37.621637] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:37.221 [2024-11-25 20:41:37.656399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:23:37.221 [2024-11-25 20:41:37.656742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:37.221 [2024-11-25 20:41:37.664386] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:37.221 [2024-11-25 20:41:37.664580] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:37.221 [2024-11-25 20:41:37.664595] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.221 20:41:37 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.221 [2024-11-25 20:41:37.680528] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:37.221 [2024-11-25 20:41:37.688418] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:37.221 [2024-11-25 20:41:37.688462] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.221 20:41:37 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:37.221 20:41:37 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:37.221 20:41:37 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76285 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76285 ']' 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76285 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76285 00:23:37.221 killing process with pid 76285 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76285' 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76285 00:23:37.221 20:41:37 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76285 00:23:37.221 [2024-11-25 20:41:39.368318] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:37.221 [2024-11-25 20:41:39.368423] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:37.221 ************************************ 00:23:37.221 END TEST ublk_recovery 00:23:37.221 ************************************ 00:23:37.221 00:23:37.221 real 1m6.435s 00:23:37.221 user 1m50.386s 00:23:37.221 sys 0m38.091s 00:23:37.221 20:41:40 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:37.221 20:41:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.221 20:41:40 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:37.221 20:41:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:37.221 20:41:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.221 20:41:40 -- common/autotest_common.sh@10 -- # set +x 00:23:37.221 20:41:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:23:37.221 20:41:40 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:37.221 20:41:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:37.221 20:41:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:37.221 20:41:40 -- common/autotest_common.sh@10 -- # set +x 00:23:37.221 ************************************ 00:23:37.221 START TEST ftl 00:23:37.221 ************************************ 00:23:37.221 20:41:40 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:37.221 * Looking for test storage... 00:23:37.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:37.222 20:41:41 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.222 20:41:41 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.222 20:41:41 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.222 20:41:41 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.222 20:41:41 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.222 20:41:41 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.222 20:41:41 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.222 20:41:41 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.222 20:41:41 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.222 20:41:41 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.222 20:41:41 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.222 20:41:41 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:37.222 20:41:41 ftl -- scripts/common.sh@345 -- # : 1 00:23:37.222 20:41:41 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.222 20:41:41 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.222 20:41:41 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:37.222 20:41:41 ftl -- scripts/common.sh@353 -- # local d=1 00:23:37.222 20:41:41 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.222 20:41:41 ftl -- scripts/common.sh@355 -- # echo 1 00:23:37.222 20:41:41 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.222 20:41:41 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:37.222 20:41:41 ftl -- scripts/common.sh@353 -- # local d=2 00:23:37.222 20:41:41 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.222 20:41:41 ftl -- scripts/common.sh@355 -- # echo 2 00:23:37.222 20:41:41 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.222 20:41:41 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.222 20:41:41 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.222 20:41:41 ftl -- scripts/common.sh@368 -- # return 0 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:37.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.222 --rc genhtml_branch_coverage=1 00:23:37.222 --rc genhtml_function_coverage=1 00:23:37.222 --rc genhtml_legend=1 00:23:37.222 --rc geninfo_all_blocks=1 00:23:37.222 --rc geninfo_unexecuted_blocks=1 00:23:37.222 00:23:37.222 ' 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:37.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.222 --rc genhtml_branch_coverage=1 00:23:37.222 --rc genhtml_function_coverage=1 00:23:37.222 --rc genhtml_legend=1 00:23:37.222 --rc geninfo_all_blocks=1 00:23:37.222 --rc geninfo_unexecuted_blocks=1 00:23:37.222 00:23:37.222 ' 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:37.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.222 --rc genhtml_branch_coverage=1 00:23:37.222 --rc genhtml_function_coverage=1 00:23:37.222 --rc genhtml_legend=1 00:23:37.222 --rc geninfo_all_blocks=1 00:23:37.222 --rc geninfo_unexecuted_blocks=1 00:23:37.222 00:23:37.222 ' 00:23:37.222 20:41:41 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:37.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.222 --rc genhtml_branch_coverage=1 00:23:37.222 --rc genhtml_function_coverage=1 00:23:37.222 --rc genhtml_legend=1 00:23:37.222 --rc geninfo_all_blocks=1 00:23:37.222 --rc geninfo_unexecuted_blocks=1 00:23:37.222 00:23:37.222 ' 00:23:37.222 20:41:41 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:37.222 20:41:41 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:37.222 20:41:41 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:37.222 20:41:41 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:37.222 20:41:41 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
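Condensed, the crash-and-recover sequence that the ublk_recovery output above traces reduces to a handful of commands. The following is a minimal sketch, assuming a built SPDK tree as the working directory, root privileges, and the ublk_drv module available; the fio job and PID bookkeeping are elided, and $spdk_pid is a hypothetical shell variable holding the first target's PID:

  modprobe ublk_drv
  ./build/bin/spdk_tgt -m 0x3 -L ublk &                    # first target instance
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB bdev, 4 KiB blocks
  ./scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1
  # ... start I/O against /dev/ublkb1, then simulate a crash:
  kill -9 "$spdk_pid"
  ./build/bin/spdk_tgt -m 0x3 -L ublk &                    # second target instance
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev
  ./scripts/rpc.py ublk_recover_disk malloc0 1             # re-attach ublk device 1

The point of the test is that the fio job started before the kill survives the target restart: it keeps issuing I/O to /dev/ublkb1 and finishes its full 60-second run, which is what the err=0 fio summary further up confirms.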
00:23:37.222 20:41:41 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:37.222 20:41:41 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.222 20:41:41 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:37.222 20:41:41 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:37.222 20:41:41 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:37.222 20:41:41 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:37.222 20:41:41 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:37.222 20:41:41 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:37.222 20:41:41 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:37.222 20:41:41 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:37.222 20:41:41 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:37.222 20:41:41 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:37.222 20:41:41 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:37.222 20:41:41 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:37.222 20:41:41 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:37.222 20:41:41 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:37.222 20:41:41 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:37.222 20:41:41 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:37.222 20:41:41 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:37.222 20:41:41 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:37.222 20:41:41 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:37.222 20:41:41 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:37.222 20:41:41 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:37.222 20:41:41 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:37.222 20:41:41 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.222 20:41:41 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:37.222 20:41:41 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:23:37.222 20:41:41 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:37.222 20:41:41 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:37.222 20:41:41 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:37.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:37.222 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:37.222 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:37.222 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:37.222 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:37.222 20:41:42 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77090 00:23:37.222 20:41:42 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:37.222 20:41:42 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77090 00:23:37.222 20:41:42 ftl -- common/autotest_common.sh@835 -- # '[' -z 77090 ']' 00:23:37.222 20:41:42 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.222 20:41:42 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.222 20:41:42 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.222 20:41:42 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.222 20:41:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:37.222 [2024-11-25 20:41:42.187753] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:23:37.222 [2024-11-25 20:41:42.188082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77090 ] 00:23:37.222 [2024-11-25 20:41:42.374525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.222 [2024-11-25 20:41:42.490471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.222 20:41:42 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.222 20:41:42 ftl -- common/autotest_common.sh@868 -- # return 0 00:23:37.222 20:41:42 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:37.222 20:41:43 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@50 -- # break 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:37.222 20:41:44 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:37.222 20:41:45 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:37.222 20:41:45 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:37.222 20:41:45 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:37.222 20:41:45 ftl -- ftl/ftl.sh@63 -- # break 00:23:37.222 20:41:45 ftl -- ftl/ftl.sh@66 -- # killprocess 77090 00:23:37.222 20:41:45 ftl -- common/autotest_common.sh@954 -- # '[' -z 77090 ']' 00:23:37.222 20:41:45 ftl -- common/autotest_common.sh@958 -- # kill -0 77090 00:23:37.222 20:41:45 ftl -- common/autotest_common.sh@959 -- # uname 00:23:37.222 20:41:45 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.222 20:41:45 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77090 00:23:37.222 20:41:45 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.222 20:41:45 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.223 20:41:45 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77090' 00:23:37.223 killing process with pid 77090 00:23:37.223 20:41:45 ftl -- common/autotest_common.sh@973 -- # kill 77090 00:23:37.223 20:41:45 ftl -- common/autotest_common.sh@978 -- # wait 77090 00:23:39.759 20:41:47 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:39.759 20:41:47 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:39.759 20:41:47 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:39.759 20:41:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.759 20:41:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:39.759 ************************************ 00:23:39.759 START TEST ftl_fio_basic 00:23:39.759 ************************************ 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:39.759 * Looking for test storage... 00:23:39.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.759 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:39.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.760 --rc genhtml_branch_coverage=1 00:23:39.760 --rc genhtml_function_coverage=1 00:23:39.760 --rc genhtml_legend=1 00:23:39.760 --rc geninfo_all_blocks=1 00:23:39.760 --rc geninfo_unexecuted_blocks=1 00:23:39.760 00:23:39.760 ' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:39.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.760 --rc genhtml_branch_coverage=1 00:23:39.760 --rc genhtml_function_coverage=1 00:23:39.760 --rc genhtml_legend=1 00:23:39.760 --rc geninfo_all_blocks=1 00:23:39.760 --rc geninfo_unexecuted_blocks=1 00:23:39.760 00:23:39.760 ' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:39.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.760 --rc genhtml_branch_coverage=1 00:23:39.760 --rc genhtml_function_coverage=1 00:23:39.760 --rc genhtml_legend=1 00:23:39.760 --rc geninfo_all_blocks=1 00:23:39.760 --rc geninfo_unexecuted_blocks=1 00:23:39.760 00:23:39.760 ' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:39.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.760 --rc genhtml_branch_coverage=1 00:23:39.760 --rc genhtml_function_coverage=1 00:23:39.760 --rc genhtml_legend=1 00:23:39.760 --rc geninfo_all_blocks=1 00:23:39.760 --rc geninfo_unexecuted_blocks=1 00:23:39.760 00:23:39.760 ' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
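The cache/base pairing that ftl.sh settled on above (nv_cache=0000:00:10.0, base device 0000:00:11.0) is derived entirely from bdev_get_bdevs output. The two jq filters it traced can be replayed by hand; a sketch, assuming a running spdk_tgt with both NVMe controllers attached:

  # Cache candidates: non-zoned bdevs with 64-byte metadata and >= 1310720 blocks.
  ./scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
         | .driver_specific.nvme[].pci_address'
  # Base candidates: any other large-enough, non-zoned NVMe bdev whose PCI
  # address is not the one already chosen as cache.
  ./scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
                  and .zoned == false and .num_blocks >= 1310720)
         | .driver_specific.nvme[].pci_address'

The md_size==64 filter is what steers the cache role toward the metadata-capable namespace (0000:00:10.0 here); any remaining qualifying device becomes a base-disk candidate for ftl0.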
00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:39.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77233 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77233 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77233 ']' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.760 20:41:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:40.020 [2024-11-25 20:41:48.004070] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:23:40.020 [2024-11-25 20:41:48.004382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77233 ] 00:23:40.279 [2024-11-25 20:41:48.192254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:40.279 [2024-11-25 20:41:48.310233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.279 [2024-11-25 20:41:48.310349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.279 [2024-11-25 20:41:48.310397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.216 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.216 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:23:41.216 20:41:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:41.216 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:41.216 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:41.216 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:41.216 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:41.216 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:41.475 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:41.475 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:41.475 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:41.475 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:41.475 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:41.475 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:41.475 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:41.475 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:41.735 { 00:23:41.735 "name": "nvme0n1", 00:23:41.735 "aliases": [ 00:23:41.735 "8fdd2d28-c0ba-4ae9-adfe-8e85250e7a86" 00:23:41.735 ], 00:23:41.735 "product_name": "NVMe disk", 00:23:41.735 "block_size": 4096, 00:23:41.735 "num_blocks": 1310720, 00:23:41.735 "uuid": "8fdd2d28-c0ba-4ae9-adfe-8e85250e7a86", 00:23:41.735 "numa_id": -1, 00:23:41.735 "assigned_rate_limits": { 00:23:41.735 "rw_ios_per_sec": 0, 00:23:41.735 "rw_mbytes_per_sec": 0, 00:23:41.735 "r_mbytes_per_sec": 0, 00:23:41.735 "w_mbytes_per_sec": 0 00:23:41.735 }, 00:23:41.735 "claimed": false, 00:23:41.735 "zoned": false, 00:23:41.735 "supported_io_types": { 00:23:41.735 "read": true, 00:23:41.735 "write": true, 00:23:41.735 "unmap": true, 00:23:41.735 "flush": true, 00:23:41.735 "reset": true, 00:23:41.735 "nvme_admin": true, 00:23:41.735 "nvme_io": true, 00:23:41.735 "nvme_io_md": false, 00:23:41.735 "write_zeroes": true, 00:23:41.735 "zcopy": false, 00:23:41.735 "get_zone_info": false, 00:23:41.735 "zone_management": false, 00:23:41.735 "zone_append": false, 00:23:41.735 "compare": true, 00:23:41.735 "compare_and_write": false, 00:23:41.735 "abort": true, 00:23:41.735 
"seek_hole": false, 00:23:41.735 "seek_data": false, 00:23:41.735 "copy": true, 00:23:41.735 "nvme_iov_md": false 00:23:41.735 }, 00:23:41.735 "driver_specific": { 00:23:41.735 "nvme": [ 00:23:41.735 { 00:23:41.735 "pci_address": "0000:00:11.0", 00:23:41.735 "trid": { 00:23:41.735 "trtype": "PCIe", 00:23:41.735 "traddr": "0000:00:11.0" 00:23:41.735 }, 00:23:41.735 "ctrlr_data": { 00:23:41.735 "cntlid": 0, 00:23:41.735 "vendor_id": "0x1b36", 00:23:41.735 "model_number": "QEMU NVMe Ctrl", 00:23:41.735 "serial_number": "12341", 00:23:41.735 "firmware_revision": "8.0.0", 00:23:41.735 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:41.735 "oacs": { 00:23:41.735 "security": 0, 00:23:41.735 "format": 1, 00:23:41.735 "firmware": 0, 00:23:41.735 "ns_manage": 1 00:23:41.735 }, 00:23:41.735 "multi_ctrlr": false, 00:23:41.735 "ana_reporting": false 00:23:41.735 }, 00:23:41.735 "vs": { 00:23:41.735 "nvme_version": "1.4" 00:23:41.735 }, 00:23:41.735 "ns_data": { 00:23:41.735 "id": 1, 00:23:41.735 "can_share": false 00:23:41.735 } 00:23:41.735 } 00:23:41.735 ], 00:23:41.735 "mp_policy": "active_passive" 00:23:41.735 } 00:23:41.735 } 00:23:41.735 ]' 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:41.735 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:41.995 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:41.995 20:41:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:42.255 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3a7da362-6399-47fd-9946-ce4573ba8edc 00:23:42.255 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3a7da362-6399-47fd-9946-ce4573ba8edc 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=231ae940-e9e8-4189-9a98-8a57ebd27142 
00:23:42.514 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:42.514 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:42.515 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:42.774 { 00:23:42.774 "name": "231ae940-e9e8-4189-9a98-8a57ebd27142", 00:23:42.774 "aliases": [ 00:23:42.774 "lvs/nvme0n1p0" 00:23:42.774 ], 00:23:42.774 "product_name": "Logical Volume", 00:23:42.774 "block_size": 4096, 00:23:42.774 "num_blocks": 26476544, 00:23:42.774 "uuid": "231ae940-e9e8-4189-9a98-8a57ebd27142", 00:23:42.774 "assigned_rate_limits": { 00:23:42.774 "rw_ios_per_sec": 0, 00:23:42.774 "rw_mbytes_per_sec": 0, 00:23:42.774 "r_mbytes_per_sec": 0, 00:23:42.774 "w_mbytes_per_sec": 0 00:23:42.774 }, 00:23:42.774 "claimed": false, 00:23:42.774 "zoned": false, 00:23:42.774 "supported_io_types": { 00:23:42.774 "read": true, 00:23:42.774 "write": true, 00:23:42.774 "unmap": true, 00:23:42.774 "flush": false, 00:23:42.774 "reset": true, 00:23:42.774 "nvme_admin": false, 00:23:42.774 "nvme_io": false, 00:23:42.774 "nvme_io_md": false, 00:23:42.774 "write_zeroes": true, 00:23:42.774 "zcopy": false, 00:23:42.774 "get_zone_info": false, 00:23:42.774 "zone_management": false, 00:23:42.774 "zone_append": false, 00:23:42.774 "compare": false, 00:23:42.774 "compare_and_write": false, 00:23:42.774 "abort": false, 00:23:42.774 "seek_hole": true, 00:23:42.774 "seek_data": true, 00:23:42.774 "copy": false, 00:23:42.774 "nvme_iov_md": false 00:23:42.774 }, 00:23:42.774 "driver_specific": { 00:23:42.774 "lvol": { 00:23:42.774 "lvol_store_uuid": "3a7da362-6399-47fd-9946-ce4573ba8edc", 00:23:42.774 "base_bdev": "nvme0n1", 00:23:42.774 "thin_provision": true, 00:23:42.774 "num_allocated_clusters": 0, 00:23:42.774 "snapshot": false, 00:23:42.774 "clone": false, 00:23:42.774 "esnap_clone": false 00:23:42.774 } 00:23:42.774 } 00:23:42.774 } 00:23:42.774 ]' 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:42.774 20:41:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:43.034 20:41:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:43.034 20:41:51 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:43.035 20:41:51 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:43.035 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:43.035 20:41:51 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:43.035 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:43.035 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:43.035 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:43.294 { 00:23:43.294 "name": "231ae940-e9e8-4189-9a98-8a57ebd27142", 00:23:43.294 "aliases": [ 00:23:43.294 "lvs/nvme0n1p0" 00:23:43.294 ], 00:23:43.294 "product_name": "Logical Volume", 00:23:43.294 "block_size": 4096, 00:23:43.294 "num_blocks": 26476544, 00:23:43.294 "uuid": "231ae940-e9e8-4189-9a98-8a57ebd27142", 00:23:43.294 "assigned_rate_limits": { 00:23:43.294 "rw_ios_per_sec": 0, 00:23:43.294 "rw_mbytes_per_sec": 0, 00:23:43.294 "r_mbytes_per_sec": 0, 00:23:43.294 "w_mbytes_per_sec": 0 00:23:43.294 }, 00:23:43.294 "claimed": false, 00:23:43.294 "zoned": false, 00:23:43.294 "supported_io_types": { 00:23:43.294 "read": true, 00:23:43.294 "write": true, 00:23:43.294 "unmap": true, 00:23:43.294 "flush": false, 00:23:43.294 "reset": true, 00:23:43.294 "nvme_admin": false, 00:23:43.294 "nvme_io": false, 00:23:43.294 "nvme_io_md": false, 00:23:43.294 "write_zeroes": true, 00:23:43.294 "zcopy": false, 00:23:43.294 "get_zone_info": false, 00:23:43.294 "zone_management": false, 00:23:43.294 "zone_append": false, 00:23:43.294 "compare": false, 00:23:43.294 "compare_and_write": false, 00:23:43.294 "abort": false, 00:23:43.294 "seek_hole": true, 00:23:43.294 "seek_data": true, 00:23:43.294 "copy": false, 00:23:43.294 "nvme_iov_md": false 00:23:43.294 }, 00:23:43.294 "driver_specific": { 00:23:43.294 "lvol": { 00:23:43.294 "lvol_store_uuid": "3a7da362-6399-47fd-9946-ce4573ba8edc", 00:23:43.294 "base_bdev": "nvme0n1", 00:23:43.294 "thin_provision": true, 00:23:43.294 "num_allocated_clusters": 0, 00:23:43.294 "snapshot": false, 00:23:43.294 "clone": false, 00:23:43.294 "esnap_clone": false 00:23:43.294 } 00:23:43.294 } 00:23:43.294 } 00:23:43.294 ]' 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:43.294 20:41:51 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:43.554 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:43.554 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 231ae940-e9e8-4189-9a98-8a57ebd27142 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:43.814 { 00:23:43.814 "name": "231ae940-e9e8-4189-9a98-8a57ebd27142", 00:23:43.814 "aliases": [ 00:23:43.814 "lvs/nvme0n1p0" 00:23:43.814 ], 00:23:43.814 "product_name": "Logical Volume", 00:23:43.814 "block_size": 4096, 00:23:43.814 "num_blocks": 26476544, 00:23:43.814 "uuid": "231ae940-e9e8-4189-9a98-8a57ebd27142", 00:23:43.814 "assigned_rate_limits": { 00:23:43.814 "rw_ios_per_sec": 0, 00:23:43.814 "rw_mbytes_per_sec": 0, 00:23:43.814 "r_mbytes_per_sec": 0, 00:23:43.814 "w_mbytes_per_sec": 0 00:23:43.814 }, 00:23:43.814 "claimed": false, 00:23:43.814 "zoned": false, 00:23:43.814 "supported_io_types": { 00:23:43.814 "read": true, 00:23:43.814 "write": true, 00:23:43.814 "unmap": true, 00:23:43.814 "flush": false, 00:23:43.814 "reset": true, 00:23:43.814 "nvme_admin": false, 00:23:43.814 "nvme_io": false, 00:23:43.814 "nvme_io_md": false, 00:23:43.814 "write_zeroes": true, 00:23:43.814 "zcopy": false, 00:23:43.814 "get_zone_info": false, 00:23:43.814 "zone_management": false, 00:23:43.814 "zone_append": false, 00:23:43.814 "compare": false, 00:23:43.814 "compare_and_write": false, 00:23:43.814 "abort": false, 00:23:43.814 "seek_hole": true, 00:23:43.814 "seek_data": true, 00:23:43.814 "copy": false, 00:23:43.814 "nvme_iov_md": false 00:23:43.814 }, 00:23:43.814 "driver_specific": { 00:23:43.814 "lvol": { 00:23:43.814 "lvol_store_uuid": "3a7da362-6399-47fd-9946-ce4573ba8edc", 00:23:43.814 "base_bdev": "nvme0n1", 00:23:43.814 "thin_provision": true, 00:23:43.814 "num_allocated_clusters": 0, 00:23:43.814 "snapshot": false, 00:23:43.814 "clone": false, 00:23:43.814 "esnap_clone": false 00:23:43.814 } 00:23:43.814 } 00:23:43.814 } 00:23:43.814 ]' 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:43.814 20:41:51 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 231ae940-e9e8-4189-9a98-8a57ebd27142 -c nvc0n1p0 --l2p_dram_limit 60 00:23:44.074 [2024-11-25 20:41:52.019482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 20:41:52.019721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:44.074 [2024-11-25 20:41:52.019752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:44.074 
[2024-11-25 20:41:52.019764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.074 [2024-11-25 20:41:52.019860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 20:41:52.019873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:44.074 [2024-11-25 20:41:52.019887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:44.074 [2024-11-25 20:41:52.019897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.074 [2024-11-25 20:41:52.019970] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:44.074 [2024-11-25 20:41:52.020974] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:44.074 [2024-11-25 20:41:52.021003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 20:41:52.021015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:44.074 [2024-11-25 20:41:52.021029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:23:44.074 [2024-11-25 20:41:52.021040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.074 [2024-11-25 20:41:52.021146] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49f9b8ea-0312-41d4-affe-cf4059355d27 00:23:44.074 [2024-11-25 20:41:52.022720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 20:41:52.022897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:44.074 [2024-11-25 20:41:52.022918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:44.074 [2024-11-25 20:41:52.022932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.074 [2024-11-25 20:41:52.030749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 20:41:52.030884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:44.074 [2024-11-25 20:41:52.031007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.692 ms 00:23:44.074 [2024-11-25 20:41:52.031057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.074 [2024-11-25 20:41:52.031224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 20:41:52.031348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:44.074 [2024-11-25 20:41:52.031393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:44.074 [2024-11-25 20:41:52.031433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.074 [2024-11-25 20:41:52.031591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 20:41:52.031638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:44.074 [2024-11-25 20:41:52.031831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:44.074 [2024-11-25 20:41:52.031871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.074 [2024-11-25 20:41:52.031952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.074 [2024-11-25 20:41:52.037432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 
20:41:52.037565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:44.074 [2024-11-25 20:41:52.037679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.499 ms 00:23:44.074 [2024-11-25 20:41:52.037716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.074 [2024-11-25 20:41:52.037822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.074 [2024-11-25 20:41:52.037863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:44.074 [2024-11-25 20:41:52.037897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:44.075 [2024-11-25 20:41:52.037983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.075 [2024-11-25 20:41:52.038102] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:44.075 [2024-11-25 20:41:52.038274] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:44.075 [2024-11-25 20:41:52.038399] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:44.075 [2024-11-25 20:41:52.038502] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:44.075 [2024-11-25 20:41:52.038560] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:44.075 [2024-11-25 20:41:52.038610] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:44.075 [2024-11-25 20:41:52.038664] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:44.075 [2024-11-25 20:41:52.038757] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:44.075 [2024-11-25 20:41:52.038797] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:44.075 [2024-11-25 20:41:52.038827] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:44.075 [2024-11-25 20:41:52.038867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.075 [2024-11-25 20:41:52.038899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:44.075 [2024-11-25 20:41:52.038935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:23:44.075 [2024-11-25 20:41:52.039015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.075 [2024-11-25 20:41:52.039200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.075 [2024-11-25 20:41:52.039242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:44.075 [2024-11-25 20:41:52.039316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:44.075 [2024-11-25 20:41:52.039366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.075 [2024-11-25 20:41:52.039587] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:44.075 [2024-11-25 20:41:52.039632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:44.075 [2024-11-25 20:41:52.039667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.075 [2024-11-25 20:41:52.039780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.075 [2024-11-25 20:41:52.039852] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:23:44.075 [2024-11-25 20:41:52.039885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:44.075 [2024-11-25 20:41:52.039917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:44.075 [2024-11-25 20:41:52.039948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:44.075 [2024-11-25 20:41:52.039980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:44.075 [2024-11-25 20:41:52.040009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.075 [2024-11-25 20:41:52.040042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:44.075 [2024-11-25 20:41:52.040071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:44.075 [2024-11-25 20:41:52.040102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.075 [2024-11-25 20:41:52.040338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:44.075 [2024-11-25 20:41:52.040384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:44.075 [2024-11-25 20:41:52.040415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.075 [2024-11-25 20:41:52.040454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:44.075 [2024-11-25 20:41:52.040484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:44.075 [2024-11-25 20:41:52.040516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.075 [2024-11-25 20:41:52.040547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:44.075 [2024-11-25 20:41:52.040579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:44.075 [2024-11-25 20:41:52.040607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.075 [2024-11-25 20:41:52.040794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:44.075 [2024-11-25 20:41:52.040829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:44.075 [2024-11-25 20:41:52.040862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.075 [2024-11-25 20:41:52.040892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:44.075 [2024-11-25 20:41:52.040923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:44.075 [2024-11-25 20:41:52.040952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.075 [2024-11-25 20:41:52.040984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:44.075 [2024-11-25 20:41:52.041013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:44.075 [2024-11-25 20:41:52.041187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.075 [2024-11-25 20:41:52.041221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:44.075 [2024-11-25 20:41:52.041257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:44.075 [2024-11-25 20:41:52.041309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.075 [2024-11-25 20:41:52.041358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:44.075 [2024-11-25 20:41:52.041390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:44.075 [2024-11-25 20:41:52.041485] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.075 [2024-11-25 20:41:52.041649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:44.075 [2024-11-25 20:41:52.041689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:44.075 [2024-11-25 20:41:52.041720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.075 [2024-11-25 20:41:52.041752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:44.075 [2024-11-25 20:41:52.041782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:44.075 [2024-11-25 20:41:52.041819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.075 [2024-11-25 20:41:52.041849] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:44.075 [2024-11-25 20:41:52.041865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:44.075 [2024-11-25 20:41:52.041875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.075 [2024-11-25 20:41:52.041889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.075 [2024-11-25 20:41:52.041900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:44.075 [2024-11-25 20:41:52.041915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:44.075 [2024-11-25 20:41:52.041925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:44.075 [2024-11-25 20:41:52.041937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:44.075 [2024-11-25 20:41:52.041947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:44.075 [2024-11-25 20:41:52.041958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:44.075 [2024-11-25 20:41:52.041973] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:44.075 [2024-11-25 20:41:52.041990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.075 [2024-11-25 20:41:52.042003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:44.075 [2024-11-25 20:41:52.042016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:44.075 [2024-11-25 20:41:52.042027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:44.075 [2024-11-25 20:41:52.042040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:44.075 [2024-11-25 20:41:52.042050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:44.075 [2024-11-25 20:41:52.042063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:44.075 [2024-11-25 20:41:52.042074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:44.075 [2024-11-25 20:41:52.042086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:23:44.076 [2024-11-25 20:41:52.042097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:44.076 [2024-11-25 20:41:52.042112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:44.076 [2024-11-25 20:41:52.042122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:44.076 [2024-11-25 20:41:52.042136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:44.076 [2024-11-25 20:41:52.042147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:44.076 [2024-11-25 20:41:52.042160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:44.076 [2024-11-25 20:41:52.042170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:44.076 [2024-11-25 20:41:52.042192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.076 [2024-11-25 20:41:52.042207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:44.076 [2024-11-25 20:41:52.042221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:44.076 [2024-11-25 20:41:52.042232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:44.076 [2024-11-25 20:41:52.042246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:44.076 [2024-11-25 20:41:52.042259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.076 [2024-11-25 20:41:52.042272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:44.076 [2024-11-25 20:41:52.042283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.739 ms 00:23:44.076 [2024-11-25 20:41:52.042296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.076 [2024-11-25 20:41:52.042460] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
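The startup trace above (and the scrub progress that follows) is the console output of the single bdev_ftl_create RPC issued earlier in this block. A minimal standalone sketch of the same sequence, assuming a running SPDK target on the default RPC socket; the bdev names and lvol UUID are taken verbatim from this log:

# Inspect the base logical volume's geometry, as the harness does with jq:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 231ae940-e9e8-4189-9a98-8a57ebd27142 | jq '.[] .block_size'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 231ae940-e9e8-4189-9a98-8a57ebd27142 | jq '.[] .num_blocks'
# Create the FTL bdev over that base device, using nvc0n1p0 as the NV cache
# write buffer and capping the L2P table at 60 MiB of DRAM. The generous
# timeout (-t 240) covers the first-startup NV cache scrub seen below:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 231ae940-e9e8-4189-9a98-8a57ebd27142 -c nvc0n1p0 --l2p_dram_limit 60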
00:23:44.076 [2024-11-25 20:41:52.042482] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:49.350 [2024-11-25 20:41:57.077543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.077664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:49.350 [2024-11-25 20:41:57.077684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5043.256 ms 00:23:49.350 [2024-11-25 20:41:57.077700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.126388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.126459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:49.350 [2024-11-25 20:41:57.126479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.383 ms 00:23:49.350 [2024-11-25 20:41:57.126494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.126675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.126694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:49.350 [2024-11-25 20:41:57.126707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:49.350 [2024-11-25 20:41:57.126725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.196164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.196243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.350 [2024-11-25 20:41:57.196265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.456 ms 00:23:49.350 [2024-11-25 20:41:57.196283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.196373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.196417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:49.350 [2024-11-25 20:41:57.196433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:49.350 [2024-11-25 20:41:57.196450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.197319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.197362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:49.350 [2024-11-25 20:41:57.197382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:23:49.350 [2024-11-25 20:41:57.197399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.197606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.197629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:49.350 [2024-11-25 20:41:57.197644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:23:49.350 [2024-11-25 20:41:57.197665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.224130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.224180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:49.350 [2024-11-25 
20:41:57.224195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.458 ms 00:23:49.350 [2024-11-25 20:41:57.224209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.237962] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:49.350 [2024-11-25 20:41:57.263788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.264053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:49.350 [2024-11-25 20:41:57.264094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.478 ms 00:23:49.350 [2024-11-25 20:41:57.264106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.356865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.356932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:49.350 [2024-11-25 20:41:57.356976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.828 ms 00:23:49.350 [2024-11-25 20:41:57.356987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.357232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.357247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:49.350 [2024-11-25 20:41:57.357267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:23:49.350 [2024-11-25 20:41:57.357278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.394660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.350 [2024-11-25 20:41:57.394823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:49.350 [2024-11-25 20:41:57.394852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.332 ms 00:23:49.350 [2024-11-25 20:41:57.394863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.350 [2024-11-25 20:41:57.431291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.351 [2024-11-25 20:41:57.431336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:49.351 [2024-11-25 20:41:57.431357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.425 ms 00:23:49.351 [2024-11-25 20:41:57.431367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.351 [2024-11-25 20:41:57.432142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.351 [2024-11-25 20:41:57.432171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:49.351 [2024-11-25 20:41:57.432187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:23:49.351 [2024-11-25 20:41:57.432197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.610 [2024-11-25 20:41:57.563556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.611 [2024-11-25 20:41:57.563793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:49.611 [2024-11-25 20:41:57.563836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 131.486 ms 00:23:49.611 [2024-11-25 20:41:57.563848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.611 [2024-11-25 
20:41:57.604056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.611 [2024-11-25 20:41:57.604108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:49.611 [2024-11-25 20:41:57.604129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.114 ms 00:23:49.611 [2024-11-25 20:41:57.604141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.611 [2024-11-25 20:41:57.640970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.611 [2024-11-25 20:41:57.641010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:49.611 [2024-11-25 20:41:57.641029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.826 ms 00:23:49.611 [2024-11-25 20:41:57.641039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.611 [2024-11-25 20:41:57.677847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.611 [2024-11-25 20:41:57.677889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:49.611 [2024-11-25 20:41:57.677908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.806 ms 00:23:49.611 [2024-11-25 20:41:57.677920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.611 [2024-11-25 20:41:57.677984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.611 [2024-11-25 20:41:57.677996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:49.611 [2024-11-25 20:41:57.678020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:49.611 [2024-11-25 20:41:57.678031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.611 [2024-11-25 20:41:57.678181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.611 [2024-11-25 20:41:57.678194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:49.611 [2024-11-25 20:41:57.678209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:49.611 [2024-11-25 20:41:57.678220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.611 [2024-11-25 20:41:57.679847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5668.989 ms, result 0 00:23:49.611 { 00:23:49.611 "name": "ftl0", 00:23:49.611 "uuid": "49f9b8ea-0312-41d4-affe-cf4059355d27" 00:23:49.611 } 00:23:49.611 20:41:57 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:49.611 20:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:49.611 20:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:49.611 20:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:49.611 20:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:49.611 20:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:49.611 20:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:49.870 20:41:57 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:50.151 [ 00:23:50.151 { 00:23:50.151 "name": "ftl0", 00:23:50.151 "aliases": [ 00:23:50.151 "49f9b8ea-0312-41d4-affe-cf4059355d27" 00:23:50.151 ], 00:23:50.151 "product_name": "FTL 
disk", 00:23:50.151 "block_size": 4096, 00:23:50.151 "num_blocks": 20971520, 00:23:50.151 "uuid": "49f9b8ea-0312-41d4-affe-cf4059355d27", 00:23:50.151 "assigned_rate_limits": { 00:23:50.151 "rw_ios_per_sec": 0, 00:23:50.151 "rw_mbytes_per_sec": 0, 00:23:50.151 "r_mbytes_per_sec": 0, 00:23:50.151 "w_mbytes_per_sec": 0 00:23:50.151 }, 00:23:50.151 "claimed": false, 00:23:50.151 "zoned": false, 00:23:50.151 "supported_io_types": { 00:23:50.151 "read": true, 00:23:50.151 "write": true, 00:23:50.151 "unmap": true, 00:23:50.151 "flush": true, 00:23:50.151 "reset": false, 00:23:50.151 "nvme_admin": false, 00:23:50.151 "nvme_io": false, 00:23:50.151 "nvme_io_md": false, 00:23:50.151 "write_zeroes": true, 00:23:50.151 "zcopy": false, 00:23:50.151 "get_zone_info": false, 00:23:50.151 "zone_management": false, 00:23:50.151 "zone_append": false, 00:23:50.151 "compare": false, 00:23:50.151 "compare_and_write": false, 00:23:50.151 "abort": false, 00:23:50.151 "seek_hole": false, 00:23:50.151 "seek_data": false, 00:23:50.151 "copy": false, 00:23:50.151 "nvme_iov_md": false 00:23:50.151 }, 00:23:50.151 "driver_specific": { 00:23:50.151 "ftl": { 00:23:50.151 "base_bdev": "231ae940-e9e8-4189-9a98-8a57ebd27142", 00:23:50.151 "cache": "nvc0n1p0" 00:23:50.151 } 00:23:50.151 } 00:23:50.151 } 00:23:50.151 ] 00:23:50.151 20:41:58 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:50.151 20:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:50.151 20:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:50.410 20:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:50.410 20:41:58 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:50.410 [2024-11-25 20:41:58.535150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.410 [2024-11-25 20:41:58.535232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:50.410 [2024-11-25 20:41:58.535252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:50.410 [2024-11-25 20:41:58.535267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.410 [2024-11-25 20:41:58.535315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:50.410 [2024-11-25 20:41:58.540137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.410 [2024-11-25 20:41:58.540176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:50.410 [2024-11-25 20:41:58.540194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:23:50.410 [2024-11-25 20:41:58.540206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.410 [2024-11-25 20:41:58.540814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.410 [2024-11-25 20:41:58.540834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:50.410 [2024-11-25 20:41:58.540850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:23:50.410 [2024-11-25 20:41:58.540861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.543438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.543461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:50.671 
[2024-11-25 20:41:58.543477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.541 ms 00:23:50.671 [2024-11-25 20:41:58.543490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.548609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.548666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:50.671 [2024-11-25 20:41:58.548684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.088 ms 00:23:50.671 [2024-11-25 20:41:58.548695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.586604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.586782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:50.671 [2024-11-25 20:41:58.586834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.870 ms 00:23:50.671 [2024-11-25 20:41:58.586846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.609919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.610072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:50.671 [2024-11-25 20:41:58.610109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.041 ms 00:23:50.671 [2024-11-25 20:41:58.610121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.610379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.610395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:50.671 [2024-11-25 20:41:58.610411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:23:50.671 [2024-11-25 20:41:58.610422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.647846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.647886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:50.671 [2024-11-25 20:41:58.647904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.448 ms 00:23:50.671 [2024-11-25 20:41:58.647931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.684462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.684501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:50.671 [2024-11-25 20:41:58.684519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.530 ms 00:23:50.671 [2024-11-25 20:41:58.684546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.720595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.720632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:50.671 [2024-11-25 20:41:58.720650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.042 ms 00:23:50.671 [2024-11-25 20:41:58.720660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.756762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.671 [2024-11-25 20:41:58.756817] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:50.671 [2024-11-25 20:41:58.756835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.011 ms 00:23:50.671 [2024-11-25 20:41:58.756861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.671 [2024-11-25 20:41:58.756921] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:50.671 [2024-11-25 20:41:58.756941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.756958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.756970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.756986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.756997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 
[2024-11-25 20:41:58.757228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:50.671 [2024-11-25 20:41:58.757522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:23:50.672 [2024-11-25 20:41:58.757581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.757993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:50.672 [2024-11-25 20:41:58.758412] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:50.672 [2024-11-25 20:41:58.758427] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49f9b8ea-0312-41d4-affe-cf4059355d27 00:23:50.672 [2024-11-25 20:41:58.758439] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:50.672 [2024-11-25 20:41:58.758458] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:50.672 [2024-11-25 20:41:58.758473] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:50.672 [2024-11-25 20:41:58.758488] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:50.672 [2024-11-25 20:41:58.758498] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:50.672 [2024-11-25 20:41:58.758513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:50.672 [2024-11-25 20:41:58.758525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:50.672 [2024-11-25 20:41:58.758538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:50.672 [2024-11-25 20:41:58.758548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:50.672 [2024-11-25 20:41:58.758562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.672 [2024-11-25 20:41:58.758574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:50.672 [2024-11-25 20:41:58.758589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:23:50.672 [2024-11-25 20:41:58.758600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.672 [2024-11-25 20:41:58.779842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.672 [2024-11-25 20:41:58.780000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:50.672 [2024-11-25 20:41:58.780026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.197 ms 00:23:50.672 [2024-11-25 20:41:58.780038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.672 [2024-11-25 20:41:58.780687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.672 [2024-11-25 20:41:58.780703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:50.672 [2024-11-25 20:41:58.780718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:23:50.672 [2024-11-25 20:41:58.780728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.932 [2024-11-25 20:41:58.855309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.932 [2024-11-25 20:41:58.855375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:50.932 [2024-11-25 20:41:58.855400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.932 [2024-11-25 20:41:58.855412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
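The shutdown trace above is the output of the bdev_ftl_unload RPC issued at the top of this block; the Rollback entries here and below undo the startup actions in reverse order. A minimal teardown sketch under the same assumptions (bdev name taken from this log, default RPC socket):

# Unload the FTL bdev. A clean shutdown first persists the L2P, NV cache,
# band, and trim metadata and marks the superblock clean, as the Persist*/
# 'Set FTL clean state' steps above show, before rolling back resources:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0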
00:23:50.932 [2024-11-25 20:41:58.855508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.932 [2024-11-25 20:41:58.855519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:50.932 [2024-11-25 20:41:58.855537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.932 [2024-11-25 20:41:58.855547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.932 [2024-11-25 20:41:58.855713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.932 [2024-11-25 20:41:58.855734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:50.932 [2024-11-25 20:41:58.855751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.932 [2024-11-25 20:41:58.855762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.932 [2024-11-25 20:41:58.855798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.932 [2024-11-25 20:41:58.855810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:50.932 [2024-11-25 20:41:58.855823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.932 [2024-11-25 20:41:58.855834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.932 [2024-11-25 20:41:59.002162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.932 [2024-11-25 20:41:59.002245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:50.932 [2024-11-25 20:41:59.002266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.932 [2024-11-25 20:41:59.002277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.199 [2024-11-25 20:41:59.111735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.199 [2024-11-25 20:41:59.112034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.199 [2024-11-25 20:41:59.112067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.199 [2024-11-25 20:41:59.112080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.199 [2024-11-25 20:41:59.112265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.199 [2024-11-25 20:41:59.112281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.199 [2024-11-25 20:41:59.112302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.199 [2024-11-25 20:41:59.112313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.199 [2024-11-25 20:41:59.112439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.199 [2024-11-25 20:41:59.112452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.199 [2024-11-25 20:41:59.112466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.199 [2024-11-25 20:41:59.112477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.199 [2024-11-25 20:41:59.112638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.199 [2024-11-25 20:41:59.112652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.199 [2024-11-25 20:41:59.112667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.199 [2024-11-25 
20:41:59.112681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.199 [2024-11-25 20:41:59.112752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.199 [2024-11-25 20:41:59.112765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.199 [2024-11-25 20:41:59.112780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.199 [2024-11-25 20:41:59.112791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.199 [2024-11-25 20:41:59.112853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.199 [2024-11-25 20:41:59.112865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.199 [2024-11-25 20:41:59.112879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.199 [2024-11-25 20:41:59.112893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.199 [2024-11-25 20:41:59.112966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.199 [2024-11-25 20:41:59.112978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.199 [2024-11-25 20:41:59.112993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.199 [2024-11-25 20:41:59.113004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.199 [2024-11-25 20:41:59.113209] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 578.966 ms, result 0 00:23:51.199 true 00:23:51.199 20:41:59 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77233 00:23:51.199 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77233 ']' 00:23:51.199 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77233 00:23:51.200 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:23:51.200 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.200 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77233 00:23:51.200 killing process with pid 77233 00:23:51.200 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.200 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.200 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77233' 00:23:51.200 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77233 00:23:51.200 20:41:59 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77233 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:54.487 20:42:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:54.746 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:54.746 fio-3.35 00:23:54.746 Starting 1 thread 00:24:01.313 00:24:01.313 test: (groupid=0, jobs=1): err= 0: pid=77469: Mon Nov 25 20:42:08 2024 00:24:01.313 read: IOPS=965, BW=64.1MiB/s (67.3MB/s)(255MiB/3968msec) 00:24:01.313 slat (nsec): min=4187, max=23396, avg=5915.05, stdev=2246.04 00:24:01.313 clat (usec): min=294, max=649, avg=471.81, stdev=57.33 00:24:01.313 lat (usec): min=300, max=659, avg=477.72, stdev=57.50 00:24:01.313 clat percentiles (usec): 00:24:01.313 | 1.00th=[ 326], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 420], 00:24:01.313 | 30.00th=[ 457], 40.00th=[ 461], 50.00th=[ 465], 60.00th=[ 482], 00:24:01.313 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 537], 95.00th=[ 537], 00:24:01.313 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 635], 99.95th=[ 644], 00:24:01.313 | 99.99th=[ 652] 00:24:01.313 write: IOPS=973, BW=64.6MiB/s (67.8MB/s)(256MiB/3963msec); 0 zone resets 00:24:01.313 slat (usec): min=15, max=100, avg=18.82, stdev= 4.35 00:24:01.313 clat (usec): min=344, max=949, avg=525.00, stdev=66.81 00:24:01.313 lat (usec): min=361, max=966, avg=543.82, stdev=67.11 00:24:01.313 clat percentiles (usec): 00:24:01.313 | 1.00th=[ 408], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 478], 00:24:01.313 | 30.00th=[ 486], 40.00th=[ 498], 50.00th=[ 545], 60.00th=[ 545], 00:24:01.313 | 70.00th=[ 553], 80.00th=[ 553], 90.00th=[ 578], 95.00th=[ 611], 00:24:01.313 | 99.00th=[ 824], 99.50th=[ 873], 99.90th=[ 938], 99.95th=[ 938], 00:24:01.313 | 99.99th=[ 947] 00:24:01.313 bw ( KiB/s): min=63648, max=67592, per=100.00%, avg=66212.57, stdev=1279.24, samples=7 00:24:01.313 iops : min= 936, max= 994, avg=973.71, stdev=18.81, samples=7 00:24:01.313 lat (usec) : 500=51.63%, 750=47.59%, 1000=0.78% 00:24:01.313 cpu : usr=99.22%, 
sys=0.13%, ctx=12, majf=0, minf=1170 00:24:01.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.313 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:01.313 00:24:01.313 Run status group 0 (all jobs): 00:24:01.313 READ: bw=64.1MiB/s (67.3MB/s), 64.1MiB/s-64.1MiB/s (67.3MB/s-67.3MB/s), io=255MiB (267MB), run=3968-3968msec 00:24:01.313 WRITE: bw=64.6MiB/s (67.8MB/s), 64.6MiB/s-64.6MiB/s (67.8MB/s-67.8MB/s), io=256MiB (269MB), run=3963-3963msec 00:24:02.250 ----------------------------------------------------- 00:24:02.250 Suppressions used: 00:24:02.250 count bytes template 00:24:02.250 1 5 /usr/src/fio/parse.c 00:24:02.250 1 8 libtcmalloc_minimal.so 00:24:02.250 1 904 libcrypto.so 00:24:02.250 ----------------------------------------------------- 00:24:02.250 00:24:02.250 20:42:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:24:02.250 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.250 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:02.510 20:42:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:02.769 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:02.769 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:02.769 fio-3.35 00:24:02.769 Starting 2 threads 00:24:29.318 00:24:29.318 first_half: (groupid=0, jobs=1): err= 0: pid=77582: Mon Nov 25 20:42:37 2024 00:24:29.318 read: IOPS=2636, BW=10.3MiB/s (10.8MB/s)(255MiB/24771msec) 00:24:29.318 slat (nsec): min=3436, max=48330, avg=6473.41, stdev=2538.54 00:24:29.318 clat (usec): min=1092, max=286390, avg=37956.58, stdev=18634.93 00:24:29.319 lat (usec): min=1115, max=286395, avg=37963.06, stdev=18635.22 00:24:29.319 clat percentiles (msec): 00:24:29.319 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:24:29.319 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:24:29.319 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 57], 00:24:29.319 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 211], 99.95th=[ 213], 00:24:29.319 | 99.99th=[ 279] 00:24:29.319 write: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(256MiB/22019msec); 0 zone resets 00:24:29.319 slat (usec): min=4, max=1732, avg= 8.36, stdev=10.54 00:24:29.319 clat (usec): min=437, max=114046, avg=10529.06, stdev=18834.56 00:24:29.319 lat (usec): min=448, max=114073, avg=10537.42, stdev=18834.84 00:24:29.319 clat percentiles (usec): 00:24:29.319 | 1.00th=[ 1057], 5.00th=[ 1450], 10.00th=[ 1762], 20.00th=[ 2212], 00:24:29.319 | 30.00th=[ 3261], 40.00th=[ 4817], 50.00th=[ 5735], 60.00th=[ 6521], 00:24:29.319 | 70.00th=[ 7439], 80.00th=[ 11207], 90.00th=[ 14484], 95.00th=[ 37487], 00:24:29.319 | 99.00th=[100140], 99.50th=[104334], 99.90th=[107480], 99.95th=[109577], 00:24:29.319 | 99.99th=[112722] 00:24:29.319 bw ( KiB/s): min= 1392, max=39488, per=90.46%, avg=20971.52, stdev=12880.60, samples=25 00:24:29.319 iops : min= 348, max= 9872, avg=5242.88, stdev=3220.15, samples=25 00:24:29.319 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.30% 00:24:29.319 lat (msec) : 2=7.42%, 4=10.01%, 10=21.48%, 20=7.71%, 50=47.06% 00:24:29.319 lat (msec) : 100=4.37%, 250=1.57%, 500=0.01% 00:24:29.319 cpu : usr=99.15%, sys=0.26%, ctx=33, majf=0, minf=5601 00:24:29.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:29.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.319 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:29.319 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:29.319 second_half: (groupid=0, jobs=1): err= 0: pid=77583: Mon Nov 25 20:42:37 2024 00:24:29.319 read: IOPS=2615, BW=10.2MiB/s (10.7MB/s)(255MiB/24970msec) 00:24:29.319 slat (nsec): min=3331, max=71241, avg=7340.49, stdev=3538.48 00:24:29.319 clat (usec): min=1134, max=293853, avg=37392.31, stdev=21304.22 00:24:29.319 lat (usec): min=1140, max=293859, avg=37399.65, stdev=21304.34 00:24:29.319 clat percentiles (msec): 00:24:29.319 | 1.00th=[ 11], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:24:29.319 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:24:29.319 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 53], 00:24:29.319 | 99.00th=[ 159], 
99.50th=[ 176], 99.90th=[ 236], 99.95th=[ 271], 00:24:29.319 | 99.99th=[ 288] 00:24:29.319 write: IOPS=2897, BW=11.3MiB/s (11.9MB/s)(256MiB/22616msec); 0 zone resets 00:24:29.319 slat (usec): min=4, max=453, avg= 8.18, stdev= 4.66 00:24:29.319 clat (usec): min=503, max=115360, avg=11475.48, stdev=20097.80 00:24:29.319 lat (usec): min=510, max=115373, avg=11483.66, stdev=20098.10 00:24:29.319 clat percentiles (usec): 00:24:29.319 | 1.00th=[ 996], 5.00th=[ 1287], 10.00th=[ 1500], 20.00th=[ 1811], 00:24:29.319 | 30.00th=[ 2212], 40.00th=[ 3523], 50.00th=[ 5145], 60.00th=[ 6456], 00:24:29.319 | 70.00th=[ 8455], 80.00th=[ 12387], 90.00th=[ 28967], 95.00th=[ 49546], 00:24:29.319 | 99.00th=[101188], 99.50th=[105382], 99.90th=[111674], 99.95th=[113771], 00:24:29.319 | 99.99th=[114820] 00:24:29.319 bw ( KiB/s): min= 936, max=52312, per=90.47%, avg=20974.52, stdev=14791.33, samples=25 00:24:29.319 iops : min= 234, max=13078, avg=5243.60, stdev=3697.80, samples=25 00:24:29.319 lat (usec) : 750=0.07%, 1000=0.45% 00:24:29.319 lat (msec) : 2=12.16%, 4=9.05%, 10=15.72%, 20=8.75%, 50=48.58% 00:24:29.319 lat (msec) : 100=3.19%, 250=1.99%, 500=0.04% 00:24:29.319 cpu : usr=99.18%, sys=0.17%, ctx=81, majf=0, minf=5510 00:24:29.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:29.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.319 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:29.319 issued rwts: total=65319,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:29.319 00:24:29.319 Run status group 0 (all jobs): 00:24:29.319 READ: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-10.3MiB/s (10.7MB/s-10.8MB/s), io=510MiB (535MB), run=24771-24970msec 00:24:29.319 WRITE: bw=22.6MiB/s (23.7MB/s), 11.3MiB/s-11.6MiB/s (11.9MB/s-12.2MB/s), io=512MiB (537MB), run=22019-22616msec 00:24:31.851 ----------------------------------------------------- 00:24:31.851 Suppressions used: 00:24:31.851 count bytes template 00:24:31.851 2 10 /usr/src/fio/parse.c 00:24:31.851 4 384 /usr/src/fio/iolog.c 00:24:31.851 1 8 libtcmalloc_minimal.so 00:24:31.851 1 904 libcrypto.so 00:24:31.851 ----------------------------------------------------- 00:24:31.851 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:31.851 20:42:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:32.110 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:32.110 fio-3.35 00:24:32.110 Starting 1 thread 00:24:50.204 00:24:50.204 test: (groupid=0, jobs=1): err= 0: pid=77908: Mon Nov 25 20:42:55 2024 00:24:50.204 read: IOPS=7042, BW=27.5MiB/s (28.8MB/s)(255MiB/9259msec) 00:24:50.204 slat (nsec): min=3195, max=63079, avg=7669.54, stdev=3754.10 00:24:50.204 clat (usec): min=693, max=33124, avg=18163.84, stdev=991.43 00:24:50.204 lat (usec): min=709, max=33129, avg=18171.51, stdev=991.96 00:24:50.204 clat percentiles (usec): 00:24:50.204 | 1.00th=[16712], 5.00th=[16909], 10.00th=[17171], 20.00th=[17433], 00:24:50.204 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18220], 00:24:50.204 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19006], 95.00th=[19268], 00:24:50.204 | 99.00th=[21627], 99.50th=[21890], 99.90th=[24511], 99.95th=[29230], 00:24:50.204 | 99.99th=[32900] 00:24:50.204 write: IOPS=12.7k, BW=49.7MiB/s (52.1MB/s)(256MiB/5155msec); 0 zone resets 00:24:50.204 slat (usec): min=4, max=477, avg= 8.20, stdev= 6.07 00:24:50.204 clat (usec): min=595, max=62201, avg=10018.98, stdev=12492.40 00:24:50.204 lat (usec): min=602, max=62214, avg=10027.18, stdev=12492.50 00:24:50.204 clat percentiles (usec): 00:24:50.204 | 1.00th=[ 1012], 5.00th=[ 1221], 10.00th=[ 1385], 20.00th=[ 1582], 00:24:50.204 | 30.00th=[ 1778], 40.00th=[ 2180], 50.00th=[ 6325], 60.00th=[ 7570], 00:24:50.204 | 70.00th=[ 8586], 80.00th=[10421], 90.00th=[36439], 95.00th=[38536], 00:24:50.204 | 99.00th=[43254], 99.50th=[44827], 99.90th=[57410], 99.95th=[58983], 00:24:50.204 | 99.99th=[60556] 00:24:50.204 bw ( KiB/s): min=11112, max=69512, per=93.72%, avg=47661.82, stdev=15105.21, samples=11 00:24:50.204 iops : min= 2778, max=17378, avg=11915.64, stdev=3776.24, samples=11 00:24:50.204 lat (usec) : 750=0.02%, 1000=0.43% 00:24:50.204 lat (msec) : 2=18.31%, 4=2.32%, 10=18.04%, 20=51.80%, 50=8.96% 00:24:50.204 lat (msec) : 100=0.11% 00:24:50.204 cpu : usr=98.69%, sys=0.42%, ctx=36, majf=0, minf=5565 00:24:50.204 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:50.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.204 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:50.204 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:50.204 00:24:50.204 Run status group 0 (all jobs): 00:24:50.204 READ: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=255MiB (267MB), run=9259-9259msec 00:24:50.204 WRITE: bw=49.7MiB/s (52.1MB/s), 49.7MiB/s-49.7MiB/s (52.1MB/s-52.1MB/s), io=256MiB (268MB), run=5155-5155msec 00:24:50.204 ----------------------------------------------------- 00:24:50.204 Suppressions used: 00:24:50.204 count bytes template 00:24:50.204 1 5 /usr/src/fio/parse.c 00:24:50.204 2 192 /usr/src/fio/iolog.c 00:24:50.204 1 8 libtcmalloc_minimal.so 00:24:50.204 1 904 libcrypto.so 00:24:50.204 ----------------------------------------------------- 00:24:50.204 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:50.204 Remove shared memory files 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57961 /dev/shm/spdk_tgt_trace.pid76133 00:24:50.204 20:42:58 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:50.205 20:42:58 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:50.205 ************************************ 00:24:50.205 END TEST ftl_fio_basic 00:24:50.205 ************************************ 00:24:50.205 00:24:50.205 real 1m10.673s 00:24:50.205 user 2m31.230s 00:24:50.205 sys 0m4.448s 00:24:50.205 20:42:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.205 20:42:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:50.464 20:42:58 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:50.464 20:42:58 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:50.464 20:42:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.464 20:42:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:50.464 ************************************ 00:24:50.464 START TEST ftl_bdevperf 00:24:50.464 ************************************ 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:50.464 * Looking for test storage... 
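Each of the three fio runs above follows the same preload pattern from autotest_common.sh's fio_plugin helper: ldd resolves the ASAN runtime that the SPDK fio plugin links against, and that library is preloaded ahead of the plugin so the sanitizer's interceptors bind first. A minimal shell sketch of the pattern, using the paths from this log (they would differ on another host):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Third column of ldd output is the resolved library path.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload ASAN before the plugin, then run fio against the job file.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio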
00:24:50.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.464 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:50.724 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.724 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.724 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.724 20:42:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:50.724 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.724 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:50.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.724 --rc genhtml_branch_coverage=1 00:24:50.724 --rc genhtml_function_coverage=1 00:24:50.724 --rc genhtml_legend=1 00:24:50.724 --rc geninfo_all_blocks=1 00:24:50.724 --rc geninfo_unexecuted_blocks=1 00:24:50.724 00:24:50.724 ' 00:24:50.724 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:50.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.724 --rc genhtml_branch_coverage=1 00:24:50.724 
--rc genhtml_function_coverage=1 00:24:50.724 --rc genhtml_legend=1 00:24:50.724 --rc geninfo_all_blocks=1 00:24:50.724 --rc geninfo_unexecuted_blocks=1 00:24:50.724 00:24:50.724 ' 00:24:50.724 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:50.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.725 --rc genhtml_branch_coverage=1 00:24:50.725 --rc genhtml_function_coverage=1 00:24:50.725 --rc genhtml_legend=1 00:24:50.725 --rc geninfo_all_blocks=1 00:24:50.725 --rc geninfo_unexecuted_blocks=1 00:24:50.725 00:24:50.725 ' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:50.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.725 --rc genhtml_branch_coverage=1 00:24:50.725 --rc genhtml_function_coverage=1 00:24:50.725 --rc genhtml_legend=1 00:24:50.725 --rc geninfo_all_blocks=1 00:24:50.725 --rc geninfo_unexecuted_blocks=1 00:24:50.725 00:24:50.725 ' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78162 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78162 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78162 ']' 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.725 20:42:58 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:50.725 [2024-11-25 20:42:58.733114] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
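bdevperf is launched here with -z, so it starts idle and waits to be configured over the RPC socket instead of reading a bdev config up front; -T ftl0 names the bdev the perf job will target. A sketch of the launch-and-wait step, with waitforlisten's socket polling shown as a plain loop (an approximation of the helper, not its exact body):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # Wait until the app answers RPCs on the default socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # The bdev stack is then built with rpc.py calls, as traced below.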
00:24:50.725 [2024-11-25 20:42:58.733490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78162 ] 00:24:50.985 [2024-11-25 20:42:58.918606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.985 [2024-11-25 20:42:59.058685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.551 20:42:59 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.551 20:42:59 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:51.551 20:42:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:51.551 20:42:59 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:51.551 20:42:59 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:51.551 20:42:59 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:51.551 20:42:59 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:51.551 20:42:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:51.810 20:42:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:51.810 20:42:59 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:51.810 20:42:59 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:51.810 20:42:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:51.810 20:42:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:51.810 20:42:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:51.810 20:42:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:51.810 20:42:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:52.069 20:43:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:52.069 { 00:24:52.069 "name": "nvme0n1", 00:24:52.069 "aliases": [ 00:24:52.069 "4e131148-8af4-4338-8411-958e15b335b3" 00:24:52.069 ], 00:24:52.069 "product_name": "NVMe disk", 00:24:52.069 "block_size": 4096, 00:24:52.069 "num_blocks": 1310720, 00:24:52.069 "uuid": "4e131148-8af4-4338-8411-958e15b335b3", 00:24:52.069 "numa_id": -1, 00:24:52.069 "assigned_rate_limits": { 00:24:52.069 "rw_ios_per_sec": 0, 00:24:52.069 "rw_mbytes_per_sec": 0, 00:24:52.069 "r_mbytes_per_sec": 0, 00:24:52.069 "w_mbytes_per_sec": 0 00:24:52.069 }, 00:24:52.069 "claimed": true, 00:24:52.069 "claim_type": "read_many_write_one", 00:24:52.069 "zoned": false, 00:24:52.069 "supported_io_types": { 00:24:52.069 "read": true, 00:24:52.069 "write": true, 00:24:52.069 "unmap": true, 00:24:52.069 "flush": true, 00:24:52.069 "reset": true, 00:24:52.069 "nvme_admin": true, 00:24:52.069 "nvme_io": true, 00:24:52.069 "nvme_io_md": false, 00:24:52.069 "write_zeroes": true, 00:24:52.069 "zcopy": false, 00:24:52.069 "get_zone_info": false, 00:24:52.069 "zone_management": false, 00:24:52.069 "zone_append": false, 00:24:52.069 "compare": true, 00:24:52.069 "compare_and_write": false, 00:24:52.069 "abort": true, 00:24:52.069 "seek_hole": false, 00:24:52.069 "seek_data": false, 00:24:52.069 "copy": true, 00:24:52.069 "nvme_iov_md": false 00:24:52.069 }, 00:24:52.069 "driver_specific": { 00:24:52.069 
"nvme": [ 00:24:52.069 { 00:24:52.069 "pci_address": "0000:00:11.0", 00:24:52.069 "trid": { 00:24:52.069 "trtype": "PCIe", 00:24:52.069 "traddr": "0000:00:11.0" 00:24:52.069 }, 00:24:52.069 "ctrlr_data": { 00:24:52.069 "cntlid": 0, 00:24:52.069 "vendor_id": "0x1b36", 00:24:52.069 "model_number": "QEMU NVMe Ctrl", 00:24:52.069 "serial_number": "12341", 00:24:52.069 "firmware_revision": "8.0.0", 00:24:52.069 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:52.069 "oacs": { 00:24:52.069 "security": 0, 00:24:52.069 "format": 1, 00:24:52.069 "firmware": 0, 00:24:52.069 "ns_manage": 1 00:24:52.069 }, 00:24:52.069 "multi_ctrlr": false, 00:24:52.069 "ana_reporting": false 00:24:52.069 }, 00:24:52.069 "vs": { 00:24:52.069 "nvme_version": "1.4" 00:24:52.069 }, 00:24:52.069 "ns_data": { 00:24:52.069 "id": 1, 00:24:52.069 "can_share": false 00:24:52.069 } 00:24:52.069 } 00:24:52.069 ], 00:24:52.069 "mp_policy": "active_passive" 00:24:52.069 } 00:24:52.069 } 00:24:52.069 ]' 00:24:52.069 20:43:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:52.069 20:43:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:52.069 20:43:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3a7da362-6399-47fd-9946-ce4573ba8edc 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:52.329 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a7da362-6399-47fd-9946-ce4573ba8edc 00:24:52.588 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:52.847 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=fa1bc79d-e31c-4c87-8064-5b671f9a2309 00:24:52.847 20:43:00 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fa1bc79d-e31c-4c87-8064-5b671f9a2309 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.107 20:43:01 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:53.107 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:53.366 { 00:24:53.366 "name": "88de0496-7705-4021-9772-bc3c0aa94023", 00:24:53.366 "aliases": [ 00:24:53.366 "lvs/nvme0n1p0" 00:24:53.366 ], 00:24:53.366 "product_name": "Logical Volume", 00:24:53.366 "block_size": 4096, 00:24:53.366 "num_blocks": 26476544, 00:24:53.366 "uuid": "88de0496-7705-4021-9772-bc3c0aa94023", 00:24:53.366 "assigned_rate_limits": { 00:24:53.366 "rw_ios_per_sec": 0, 00:24:53.366 "rw_mbytes_per_sec": 0, 00:24:53.366 "r_mbytes_per_sec": 0, 00:24:53.366 "w_mbytes_per_sec": 0 00:24:53.366 }, 00:24:53.366 "claimed": false, 00:24:53.366 "zoned": false, 00:24:53.366 "supported_io_types": { 00:24:53.366 "read": true, 00:24:53.366 "write": true, 00:24:53.366 "unmap": true, 00:24:53.366 "flush": false, 00:24:53.366 "reset": true, 00:24:53.366 "nvme_admin": false, 00:24:53.366 "nvme_io": false, 00:24:53.366 "nvme_io_md": false, 00:24:53.366 "write_zeroes": true, 00:24:53.366 "zcopy": false, 00:24:53.366 "get_zone_info": false, 00:24:53.366 "zone_management": false, 00:24:53.366 "zone_append": false, 00:24:53.366 "compare": false, 00:24:53.366 "compare_and_write": false, 00:24:53.366 "abort": false, 00:24:53.366 "seek_hole": true, 00:24:53.366 "seek_data": true, 00:24:53.366 "copy": false, 00:24:53.366 "nvme_iov_md": false 00:24:53.366 }, 00:24:53.366 "driver_specific": { 00:24:53.366 "lvol": { 00:24:53.366 "lvol_store_uuid": "fa1bc79d-e31c-4c87-8064-5b671f9a2309", 00:24:53.366 "base_bdev": "nvme0n1", 00:24:53.366 "thin_provision": true, 00:24:53.366 "num_allocated_clusters": 0, 00:24:53.366 "snapshot": false, 00:24:53.366 "clone": false, 00:24:53.366 "esnap_clone": false 00:24:53.366 } 00:24:53.366 } 00:24:53.366 } 00:24:53.366 ]' 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:53.366 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:53.625 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:53.625 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:53.625 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.625 20:43:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.625 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:53.625 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:53.625 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:53.625 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88de0496-7705-4021-9772-bc3c0aa94023 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:53.885 { 00:24:53.885 "name": "88de0496-7705-4021-9772-bc3c0aa94023", 00:24:53.885 "aliases": [ 00:24:53.885 "lvs/nvme0n1p0" 00:24:53.885 ], 00:24:53.885 "product_name": "Logical Volume", 00:24:53.885 "block_size": 4096, 00:24:53.885 "num_blocks": 26476544, 00:24:53.885 "uuid": "88de0496-7705-4021-9772-bc3c0aa94023", 00:24:53.885 "assigned_rate_limits": { 00:24:53.885 "rw_ios_per_sec": 0, 00:24:53.885 "rw_mbytes_per_sec": 0, 00:24:53.885 "r_mbytes_per_sec": 0, 00:24:53.885 "w_mbytes_per_sec": 0 00:24:53.885 }, 00:24:53.885 "claimed": false, 00:24:53.885 "zoned": false, 00:24:53.885 "supported_io_types": { 00:24:53.885 "read": true, 00:24:53.885 "write": true, 00:24:53.885 "unmap": true, 00:24:53.885 "flush": false, 00:24:53.885 "reset": true, 00:24:53.885 "nvme_admin": false, 00:24:53.885 "nvme_io": false, 00:24:53.885 "nvme_io_md": false, 00:24:53.885 "write_zeroes": true, 00:24:53.885 "zcopy": false, 00:24:53.885 "get_zone_info": false, 00:24:53.885 "zone_management": false, 00:24:53.885 "zone_append": false, 00:24:53.885 "compare": false, 00:24:53.885 "compare_and_write": false, 00:24:53.885 "abort": false, 00:24:53.885 "seek_hole": true, 00:24:53.885 "seek_data": true, 00:24:53.885 "copy": false, 00:24:53.885 "nvme_iov_md": false 00:24:53.885 }, 00:24:53.885 "driver_specific": { 00:24:53.885 "lvol": { 00:24:53.885 "lvol_store_uuid": "fa1bc79d-e31c-4c87-8064-5b671f9a2309", 00:24:53.885 "base_bdev": "nvme0n1", 00:24:53.885 "thin_provision": true, 00:24:53.885 "num_allocated_clusters": 0, 00:24:53.885 "snapshot": false, 00:24:53.885 "clone": false, 00:24:53.885 "esnap_clone": false 00:24:53.885 } 00:24:53.885 } 00:24:53.885 } 00:24:53.885 ]' 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:53.885 20:43:01 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:54.145 20:43:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:54.145 20:43:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 88de0496-7705-4021-9772-bc3c0aa94023 00:24:54.145 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=88de0496-7705-4021-9772-bc3c0aa94023 00:24:54.145 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:54.145 20:43:02 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:54.145 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:54.145 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88de0496-7705-4021-9772-bc3c0aa94023 00:24:54.404 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:54.404 { 00:24:54.404 "name": "88de0496-7705-4021-9772-bc3c0aa94023", 00:24:54.404 "aliases": [ 00:24:54.404 "lvs/nvme0n1p0" 00:24:54.404 ], 00:24:54.405 "product_name": "Logical Volume", 00:24:54.405 "block_size": 4096, 00:24:54.405 "num_blocks": 26476544, 00:24:54.405 "uuid": "88de0496-7705-4021-9772-bc3c0aa94023", 00:24:54.405 "assigned_rate_limits": { 00:24:54.405 "rw_ios_per_sec": 0, 00:24:54.405 "rw_mbytes_per_sec": 0, 00:24:54.405 "r_mbytes_per_sec": 0, 00:24:54.405 "w_mbytes_per_sec": 0 00:24:54.405 }, 00:24:54.405 "claimed": false, 00:24:54.405 "zoned": false, 00:24:54.405 "supported_io_types": { 00:24:54.405 "read": true, 00:24:54.405 "write": true, 00:24:54.405 "unmap": true, 00:24:54.405 "flush": false, 00:24:54.405 "reset": true, 00:24:54.405 "nvme_admin": false, 00:24:54.405 "nvme_io": false, 00:24:54.405 "nvme_io_md": false, 00:24:54.405 "write_zeroes": true, 00:24:54.405 "zcopy": false, 00:24:54.405 "get_zone_info": false, 00:24:54.405 "zone_management": false, 00:24:54.405 "zone_append": false, 00:24:54.405 "compare": false, 00:24:54.405 "compare_and_write": false, 00:24:54.405 "abort": false, 00:24:54.405 "seek_hole": true, 00:24:54.405 "seek_data": true, 00:24:54.405 "copy": false, 00:24:54.405 "nvme_iov_md": false 00:24:54.405 }, 00:24:54.405 "driver_specific": { 00:24:54.405 "lvol": { 00:24:54.405 "lvol_store_uuid": "fa1bc79d-e31c-4c87-8064-5b671f9a2309", 00:24:54.405 "base_bdev": "nvme0n1", 00:24:54.405 "thin_provision": true, 00:24:54.405 "num_allocated_clusters": 0, 00:24:54.405 "snapshot": false, 00:24:54.405 "clone": false, 00:24:54.405 "esnap_clone": false 00:24:54.405 } 00:24:54.405 } 00:24:54.405 } 00:24:54.405 ]' 00:24:54.405 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:54.405 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:54.405 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:54.405 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:54.405 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:54.405 20:43:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:54.405 20:43:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:54.405 20:43:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 88de0496-7705-4021-9772-bc3c0aa94023 -c nvc0n1p0 --l2p_dram_limit 20 00:24:54.666 [2024-11-25 20:43:02.682773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.683065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:54.666 [2024-11-25 20:43:02.683096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:54.666 [2024-11-25 20:43:02.683111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.683208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.683230] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.666 [2024-11-25 20:43:02.683242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:54.666 [2024-11-25 20:43:02.683257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.683280] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:54.666 [2024-11-25 20:43:02.684445] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:54.666 [2024-11-25 20:43:02.684471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.684486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.666 [2024-11-25 20:43:02.684497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:24:54.666 [2024-11-25 20:43:02.684511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.684594] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e46c7914-af16-4b82-aabf-57b7e9f75e95 00:24:54.666 [2024-11-25 20:43:02.686991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.687032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:54.666 [2024-11-25 20:43:02.687057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:54.666 [2024-11-25 20:43:02.687068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.701460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.701674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.666 [2024-11-25 20:43:02.701710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.328 ms 00:24:54.666 [2024-11-25 20:43:02.701722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.701860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.701875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.666 [2024-11-25 20:43:02.701894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:24:54.666 [2024-11-25 20:43:02.701905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.701983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.701996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:54.666 [2024-11-25 20:43:02.702010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:54.666 [2024-11-25 20:43:02.702020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.702052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:54.666 [2024-11-25 20:43:02.708769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.708917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.666 [2024-11-25 20:43:02.708939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.742 ms 00:24:54.666 [2024-11-25 20:43:02.708961] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.708999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.709014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:54.666 [2024-11-25 20:43:02.709026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:54.666 [2024-11-25 20:43:02.709039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.709077] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:54.666 [2024-11-25 20:43:02.709225] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:54.666 [2024-11-25 20:43:02.709240] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:54.666 [2024-11-25 20:43:02.709258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:54.666 [2024-11-25 20:43:02.709272] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:54.666 [2024-11-25 20:43:02.709287] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:54.666 [2024-11-25 20:43:02.709298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:54.666 [2024-11-25 20:43:02.709312] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:54.666 [2024-11-25 20:43:02.709323] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:54.666 [2024-11-25 20:43:02.709352] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:54.666 [2024-11-25 20:43:02.709363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.709381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:54.666 [2024-11-25 20:43:02.709393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:24:54.666 [2024-11-25 20:43:02.709407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.709481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.666 [2024-11-25 20:43:02.709497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:54.666 [2024-11-25 20:43:02.709508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:54.666 [2024-11-25 20:43:02.709525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.666 [2024-11-25 20:43:02.709617] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:54.666 [2024-11-25 20:43:02.709634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:54.666 [2024-11-25 20:43:02.709649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.666 [2024-11-25 20:43:02.709664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.666 [2024-11-25 20:43:02.709675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:54.666 [2024-11-25 20:43:02.709689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:54.666 [2024-11-25 20:43:02.709698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:54.666 
[2024-11-25 20:43:02.709711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:54.666 [2024-11-25 20:43:02.709721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:54.666 [2024-11-25 20:43:02.709733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.666 [2024-11-25 20:43:02.709742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:54.666 [2024-11-25 20:43:02.709767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:54.666 [2024-11-25 20:43:02.709778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.666 [2024-11-25 20:43:02.709793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:54.666 [2024-11-25 20:43:02.709803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:54.666 [2024-11-25 20:43:02.709819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.667 [2024-11-25 20:43:02.709829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:54.667 [2024-11-25 20:43:02.709842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:54.667 [2024-11-25 20:43:02.709851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.667 [2024-11-25 20:43:02.709866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:54.667 [2024-11-25 20:43:02.709877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:54.667 [2024-11-25 20:43:02.709889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.667 [2024-11-25 20:43:02.709899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:54.667 [2024-11-25 20:43:02.709911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:54.667 [2024-11-25 20:43:02.709921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.667 [2024-11-25 20:43:02.709933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:54.667 [2024-11-25 20:43:02.709943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:54.667 [2024-11-25 20:43:02.709955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.667 [2024-11-25 20:43:02.709964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:54.667 [2024-11-25 20:43:02.709977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:54.667 [2024-11-25 20:43:02.709986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.667 [2024-11-25 20:43:02.710001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:54.667 [2024-11-25 20:43:02.710010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:54.667 [2024-11-25 20:43:02.710023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.667 [2024-11-25 20:43:02.710032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:54.667 [2024-11-25 20:43:02.710045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:54.667 [2024-11-25 20:43:02.710054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.667 [2024-11-25 20:43:02.710067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:54.667 [2024-11-25 20:43:02.710076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:54.667 [2024-11-25 20:43:02.710089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.667 [2024-11-25 20:43:02.710098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:54.667 [2024-11-25 20:43:02.710111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:54.667 [2024-11-25 20:43:02.710120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.667 [2024-11-25 20:43:02.710132] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:54.667 [2024-11-25 20:43:02.710142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:54.667 [2024-11-25 20:43:02.710158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.667 [2024-11-25 20:43:02.710169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.667 [2024-11-25 20:43:02.710188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:54.667 [2024-11-25 20:43:02.710199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:54.667 [2024-11-25 20:43:02.710211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:54.667 [2024-11-25 20:43:02.710222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:54.667 [2024-11-25 20:43:02.710234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:54.667 [2024-11-25 20:43:02.710244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:54.667 [2024-11-25 20:43:02.710261] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:54.667 [2024-11-25 20:43:02.710275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.667 [2024-11-25 20:43:02.710290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:54.667 [2024-11-25 20:43:02.710301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:54.667 [2024-11-25 20:43:02.710314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:54.667 [2024-11-25 20:43:02.710335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:54.667 [2024-11-25 20:43:02.710350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:54.667 [2024-11-25 20:43:02.710360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:54.667 [2024-11-25 20:43:02.710374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:54.667 [2024-11-25 20:43:02.710385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:54.667 [2024-11-25 20:43:02.710402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:54.667 [2024-11-25 20:43:02.710413] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:54.667 [2024-11-25 20:43:02.710426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:54.667 [2024-11-25 20:43:02.710437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:54.667 [2024-11-25 20:43:02.710460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:54.667 [2024-11-25 20:43:02.710472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:54.667 [2024-11-25 20:43:02.710485] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:54.667 [2024-11-25 20:43:02.710497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.667 [2024-11-25 20:43:02.710518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:54.667 [2024-11-25 20:43:02.710528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:54.667 [2024-11-25 20:43:02.710542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:54.667 [2024-11-25 20:43:02.710553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:54.667 [2024-11-25 20:43:02.710567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.667 [2024-11-25 20:43:02.710579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:54.667 [2024-11-25 20:43:02.710594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:24:54.667 [2024-11-25 20:43:02.710605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.667 [2024-11-25 20:43:02.710651] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
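The MiB figures in the region dump and the hex block offsets in the superblock metadata dump describe the same layout; they line up under the 4096-byte FTL block size the numbers imply. A quick cross-check of the entry at blk_offs 0x5020, which lines up with band_md (a sketch using gawk's strtonum; values copied from the dumps above):

    awk 'BEGIN {
        blk  = 4096                     # FTL block size implied by the dump
        offs = strtonum("0x5020")       # blk_offs of the band_md-sized region
        sz   = strtonum("0x80")         # its blk_sz
        printf "offset: %.2f MiB\n", offs * blk / 1048576   # -> 80.12 MiB
        printf "blocks: %.2f MiB\n", sz  * blk / 1048576    # -> 0.50 MiB
    }'

Both values match the band_md region reported by ftl_layout.c earlier in the startup dump.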
00:24:54.667 [2024-11-25 20:43:02.710664] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:57.962 [2024-11-25 20:43:06.029269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.962 [2024-11-25 20:43:06.029374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:57.962 [2024-11-25 20:43:06.029399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3323.993 ms 00:24:57.962 [2024-11-25 20:43:06.029426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.962 [2024-11-25 20:43:06.076178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.962 [2024-11-25 20:43:06.076469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.962 [2024-11-25 20:43:06.076506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.448 ms 00:24:57.962 [2024-11-25 20:43:06.076518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.962 [2024-11-25 20:43:06.076672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.962 [2024-11-25 20:43:06.076687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:57.962 [2024-11-25 20:43:06.076706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:57.962 [2024-11-25 20:43:06.076717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.141559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.141620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.222 [2024-11-25 20:43:06.141641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.876 ms 00:24:58.222 [2024-11-25 20:43:06.141653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.141699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.141711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.222 [2024-11-25 20:43:06.141725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:58.222 [2024-11-25 20:43:06.141740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.142610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.142631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.222 [2024-11-25 20:43:06.142646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:24:58.222 [2024-11-25 20:43:06.142657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.142782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.142795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.222 [2024-11-25 20:43:06.142813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:24:58.222 [2024-11-25 20:43:06.142823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.166410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.166450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.222 [2024-11-25 
20:43:06.166468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.599 ms 00:24:58.222 [2024-11-25 20:43:06.166493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.180750] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:58.222 [2024-11-25 20:43:06.190533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.190569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:58.222 [2024-11-25 20:43:06.190583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.990 ms 00:24:58.222 [2024-11-25 20:43:06.190598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.279866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.279959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:58.222 [2024-11-25 20:43:06.279994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.374 ms 00:24:58.222 [2024-11-25 20:43:06.280010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.280229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.280252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:58.222 [2024-11-25 20:43:06.280265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:24:58.222 [2024-11-25 20:43:06.280284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.316513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.316560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:58.222 [2024-11-25 20:43:06.316576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.209 ms 00:24:58.222 [2024-11-25 20:43:06.316590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.351060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.351216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:58.222 [2024-11-25 20:43:06.351239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.485 ms 00:24:58.222 [2024-11-25 20:43:06.351253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.222 [2024-11-25 20:43:06.352093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.222 [2024-11-25 20:43:06.352121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:58.222 [2024-11-25 20:43:06.352134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:24:58.222 [2024-11-25 20:43:06.352148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.482 [2024-11-25 20:43:06.450541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.482 [2024-11-25 20:43:06.450610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:58.482 [2024-11-25 20:43:06.450628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.495 ms 00:24:58.482 [2024-11-25 20:43:06.450643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.482 [2024-11-25 
20:43:06.490009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.482 [2024-11-25 20:43:06.490066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:58.482 [2024-11-25 20:43:06.490087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.345 ms 00:24:58.482 [2024-11-25 20:43:06.490102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.482 [2024-11-25 20:43:06.527214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.482 [2024-11-25 20:43:06.527263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:58.482 [2024-11-25 20:43:06.527279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.128 ms 00:24:58.482 [2024-11-25 20:43:06.527292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.482 [2024-11-25 20:43:06.564124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.482 [2024-11-25 20:43:06.564184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:58.482 [2024-11-25 20:43:06.564216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.837 ms 00:24:58.482 [2024-11-25 20:43:06.564231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.482 [2024-11-25 20:43:06.564276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.482 [2024-11-25 20:43:06.564296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:58.482 [2024-11-25 20:43:06.564308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:58.482 [2024-11-25 20:43:06.564321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.482 [2024-11-25 20:43:06.564460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.482 [2024-11-25 20:43:06.564477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:58.482 [2024-11-25 20:43:06.564489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:58.482 [2024-11-25 20:43:06.564503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.482 [2024-11-25 20:43:06.565928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3888.954 ms, result 0 00:24:58.482 { 00:24:58.482 "name": "ftl0", 00:24:58.482 "uuid": "e46c7914-af16-4b82-aabf-57b7e9f75e95" 00:24:58.482 } 00:24:58.482 20:43:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:58.482 20:43:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:58.482 20:43:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:58.742 20:43:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:59.001 [2024-11-25 20:43:06.889804] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:59.001 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:59.001 Zero copy mechanism will not be used. 00:24:59.001 Running I/O for 4 seconds... 
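The pipeline traced at bdevperf.sh@28 is the script's liveness check for the freshly created bdev: it passes only when bdev_ftl_get_stats reports the expected name. Run standalone it looks like this (a sketch; same rpc.py path as this environment):

    # exit 0 only if the stats object for ftl0 carries the expected bdev name
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 \
        | jq -r .name | grep -qw ftl0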
00:25:00.880 1519.00 IOPS, 100.87 MiB/s [2024-11-25T20:43:09.955Z] 1530.00 IOPS, 101.60 MiB/s [2024-11-25T20:43:11.336Z] 1592.33 IOPS, 105.74 MiB/s [2024-11-25T20:43:11.336Z] 1637.75 IOPS, 108.76 MiB/s 00:25:03.200 Latency(us) 00:25:03.200 [2024-11-25T20:43:11.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.200 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:25:03.200 ftl0 : 4.00 1637.23 108.72 0.00 0.00 640.17 235.23 8527.58 00:25:03.200 [2024-11-25T20:43:11.336Z] =================================================================================================================== 00:25:03.200 [2024-11-25T20:43:11.336Z] Total : 1637.23 108.72 0.00 0.00 640.17 235.23 8527.58 00:25:03.200 [2024-11-25 20:43:10.895985] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:03.200 { 00:25:03.200 "results": [ 00:25:03.200 { 00:25:03.200 "job": "ftl0", 00:25:03.200 "core_mask": "0x1", 00:25:03.200 "workload": "randwrite", 00:25:03.200 "status": "finished", 00:25:03.200 "queue_depth": 1, 00:25:03.200 "io_size": 69632, 00:25:03.200 "runtime": 4.001889, 00:25:03.200 "iops": 1637.2268196344276, 00:25:03.200 "mibps": 108.7220934913487, 00:25:03.200 "io_failed": 0, 00:25:03.200 "io_timeout": 0, 00:25:03.200 "avg_latency_us": 640.1665818340517, 00:25:03.200 "min_latency_us": 235.23212851405623, 00:25:03.200 "max_latency_us": 8527.575903614457 00:25:03.200 } 00:25:03.200 ], 00:25:03.200 "core_count": 1 00:25:03.200 } 00:25:03.200 20:43:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:25:03.200 [2024-11-25 20:43:11.029829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:03.200 Running I/O for 4 seconds... 
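The mibps column in these summaries is derived rather than separately measured: IOPS times I/O size. Checking the first pass against the exact figures from its JSON results block above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 1637.2268196344276 * 69632 / 1048576 }'
    # -> 108.72, matching the reported "mibps" for the 68 KiB randwrite pass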
00:25:05.076 11953.00 IOPS, 46.69 MiB/s [2024-11-25T20:43:14.148Z] 11761.00 IOPS, 45.94 MiB/s [2024-11-25T20:43:15.086Z] 11560.00 IOPS, 45.16 MiB/s [2024-11-25T20:43:15.086Z] 11436.25 IOPS, 44.67 MiB/s 00:25:06.950 Latency(us) 00:25:06.950 [2024-11-25T20:43:15.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.950 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:25:06.950 ftl0 : 4.02 11422.62 44.62 0.00 0.00 11182.33 243.46 23792.99 00:25:06.950 [2024-11-25T20:43:15.086Z] =================================================================================================================== 00:25:06.950 [2024-11-25T20:43:15.086Z] Total : 11422.62 44.62 0.00 0.00 11182.33 0.00 23792.99 00:25:06.950 [2024-11-25 20:43:15.050653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:06.950 { 00:25:06.950 "results": [ 00:25:06.950 { 00:25:06.950 "job": "ftl0", 00:25:06.950 "core_mask": "0x1", 00:25:06.950 "workload": "randwrite", 00:25:06.950 "status": "finished", 00:25:06.950 "queue_depth": 128, 00:25:06.950 "io_size": 4096, 00:25:06.950 "runtime": 4.015978, 00:25:06.950 "iops": 11422.622335082513, 00:25:06.950 "mibps": 44.61961849641607, 00:25:06.950 "io_failed": 0, 00:25:06.950 "io_timeout": 0, 00:25:06.950 "avg_latency_us": 11182.330761451842, 00:25:06.950 "min_latency_us": 243.4570281124498, 00:25:06.950 "max_latency_us": 23792.98955823293 00:25:06.950 } 00:25:06.950 ], 00:25:06.950 "core_count": 1 00:25:06.950 } 00:25:07.209 20:43:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:25:07.209 [2024-11-25 20:43:15.177083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:07.209 Running I/O for 4 seconds... 
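Each pass emits the same results object, so headline numbers can be scraped uniformly. A sketch, assuming one run's JSON has been saved as results.json (a hypothetical filename):

    # job name, IOPS, MiB/s and average latency as one tab-separated row
    jq -r '.results[0] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' results.json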
00:25:09.082 8460.00 IOPS, 33.05 MiB/s [2024-11-25T20:43:18.596Z] 8433.00 IOPS, 32.94 MiB/s [2024-11-25T20:43:19.534Z] 8504.67 IOPS, 33.22 MiB/s [2024-11-25T20:43:19.534Z] 8465.25 IOPS, 33.07 MiB/s 00:25:11.398 Latency(us) 00:25:11.398 [2024-11-25T20:43:19.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.398 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:11.398 Verification LBA range: start 0x0 length 0x1400000 00:25:11.398 ftl0 : 4.01 8476.40 33.11 0.00 0.00 15055.15 264.84 31794.17 00:25:11.398 [2024-11-25T20:43:19.534Z] =================================================================================================================== 00:25:11.398 [2024-11-25T20:43:19.534Z] Total : 8476.40 33.11 0.00 0.00 15055.15 0.00 31794.17 00:25:11.398 { 00:25:11.398 "results": [ 00:25:11.398 { 00:25:11.398 "job": "ftl0", 00:25:11.398 "core_mask": "0x1", 00:25:11.398 "workload": "verify", 00:25:11.398 "status": "finished", 00:25:11.398 "verify_range": { 00:25:11.398 "start": 0, 00:25:11.398 "length": 20971520 00:25:11.398 }, 00:25:11.398 "queue_depth": 128, 00:25:11.398 "io_size": 4096, 00:25:11.398 "runtime": 4.009721, 00:25:11.398 "iops": 8476.400228345065, 00:25:11.398 "mibps": 33.11093839197291, 00:25:11.398 "io_failed": 0, 00:25:11.398 "io_timeout": 0, 00:25:11.398 "avg_latency_us": 15055.15462797406, 00:25:11.398 "min_latency_us": 264.8417670682731, 00:25:11.398 "max_latency_us": 31794.1718875502 00:25:11.398 } 00:25:11.398 ], 00:25:11.398 "core_count": 1 00:25:11.398 } 00:25:11.398 [2024-11-25 20:43:19.201486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:11.398 20:43:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:25:11.398 [2024-11-25 20:43:19.417890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.398 [2024-11-25 20:43:19.417955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:11.398 [2024-11-25 20:43:19.417991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:11.398 [2024-11-25 20:43:19.418009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.398 [2024-11-25 20:43:19.418040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:11.398 [2024-11-25 20:43:19.422726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.398 [2024-11-25 20:43:19.422919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:11.398 [2024-11-25 20:43:19.422948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:25:11.398 [2024-11-25 20:43:19.422959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.398 [2024-11-25 20:43:19.424951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.398 [2024-11-25 20:43:19.424989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:11.398 [2024-11-25 20:43:19.425012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.947 ms 00:25:11.398 [2024-11-25 20:43:19.425023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.658 [2024-11-25 20:43:19.641250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.658 [2024-11-25 20:43:19.641322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:25:11.658 [2024-11-25 20:43:19.641361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 216.550 ms 00:25:11.658 [2024-11-25 20:43:19.641373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.658 [2024-11-25 20:43:19.646487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.658 [2024-11-25 20:43:19.646523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:11.658 [2024-11-25 20:43:19.646541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.075 ms 00:25:11.658 [2024-11-25 20:43:19.646557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.658 [2024-11-25 20:43:19.683142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.658 [2024-11-25 20:43:19.683183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:11.658 [2024-11-25 20:43:19.683202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.581 ms 00:25:11.658 [2024-11-25 20:43:19.683228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.658 [2024-11-25 20:43:19.705549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.658 [2024-11-25 20:43:19.705730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:11.658 [2024-11-25 20:43:19.705763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.309 ms 00:25:11.658 [2024-11-25 20:43:19.705778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.658 [2024-11-25 20:43:19.705942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.658 [2024-11-25 20:43:19.705959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:11.658 [2024-11-25 20:43:19.705981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:11.658 [2024-11-25 20:43:19.705995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.658 [2024-11-25 20:43:19.741927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.658 [2024-11-25 20:43:19.741969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:11.658 [2024-11-25 20:43:19.742005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.963 ms 00:25:11.658 [2024-11-25 20:43:19.742019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.658 [2024-11-25 20:43:19.776250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.658 [2024-11-25 20:43:19.776299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:11.658 [2024-11-25 20:43:19.776317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.236 ms 00:25:11.658 [2024-11-25 20:43:19.776356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.919 [2024-11-25 20:43:19.811067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.919 [2024-11-25 20:43:19.811102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:11.919 [2024-11-25 20:43:19.811118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.720 ms 00:25:11.919 [2024-11-25 20:43:19.811144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.919 [2024-11-25 20:43:19.846206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.919 [2024-11-25 20:43:19.846364] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:11.919 [2024-11-25 20:43:19.846394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.021 ms 00:25:11.919 [2024-11-25 20:43:19.846405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.919 [2024-11-25 20:43:19.846445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:11.919 [2024-11-25 20:43:19.846464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:25:11.919 [2024-11-25 20:43:19.846755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.846995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:11.919 [2024-11-25 20:43:19.847623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847805] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:11.920 [2024-11-25 20:43:19.847864] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:11.920 [2024-11-25 20:43:19.847878] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e46c7914-af16-4b82-aabf-57b7e9f75e95 00:25:11.920 [2024-11-25 20:43:19.847893] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:11.920 [2024-11-25 20:43:19.847906] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:11.920 [2024-11-25 20:43:19.847916] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:11.920 [2024-11-25 20:43:19.847931] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:11.920 [2024-11-25 20:43:19.847941] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:11.920 [2024-11-25 20:43:19.847957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:11.920 [2024-11-25 20:43:19.847966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:11.920 [2024-11-25 20:43:19.847982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:11.920 [2024-11-25 20:43:19.847991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:11.920 [2024-11-25 20:43:19.848004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.920 [2024-11-25 20:43:19.848015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:11.920 [2024-11-25 20:43:19.848029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:25:11.920 [2024-11-25 20:43:19.848039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.920 [2024-11-25 20:43:19.868476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.920 [2024-11-25 20:43:19.868510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:11.920 [2024-11-25 20:43:19.868527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.412 ms 00:25:11.920 [2024-11-25 20:43:19.868554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.920 [2024-11-25 20:43:19.869137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.920 [2024-11-25 20:43:19.869151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:11.920 [2024-11-25 20:43:19.869166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:25:11.920 [2024-11-25 20:43:19.869180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.920 [2024-11-25 20:43:19.926548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.920 [2024-11-25 20:43:19.926597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:11.920 [2024-11-25 20:43:19.926620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.920 [2024-11-25 20:43:19.926631] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:11.920 [2024-11-25 20:43:19.926712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.920 [2024-11-25 20:43:19.926724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:11.920 [2024-11-25 20:43:19.926739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.920 [2024-11-25 20:43:19.926753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.920 [2024-11-25 20:43:19.926863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.920 [2024-11-25 20:43:19.926879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:11.920 [2024-11-25 20:43:19.926893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.920 [2024-11-25 20:43:19.926903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.920 [2024-11-25 20:43:19.926926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.920 [2024-11-25 20:43:19.926937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:11.920 [2024-11-25 20:43:19.926952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.920 [2024-11-25 20:43:19.926963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.064489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.179 [2024-11-25 20:43:20.064560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:12.179 [2024-11-25 20:43:20.064601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.179 [2024-11-25 20:43:20.064613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.168056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.179 [2024-11-25 20:43:20.168126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:12.179 [2024-11-25 20:43:20.168163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.179 [2024-11-25 20:43:20.168175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.168595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.179 [2024-11-25 20:43:20.168651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:12.179 [2024-11-25 20:43:20.168689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.179 [2024-11-25 20:43:20.168721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.168920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.179 [2024-11-25 20:43:20.168941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:12.179 [2024-11-25 20:43:20.168956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.179 [2024-11-25 20:43:20.168967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.169114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.179 [2024-11-25 20:43:20.169132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:12.179 [2024-11-25 20:43:20.169151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:25:12.179 [2024-11-25 20:43:20.169162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.169205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.179 [2024-11-25 20:43:20.169218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:12.179 [2024-11-25 20:43:20.169232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.179 [2024-11-25 20:43:20.169242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.169292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.179 [2024-11-25 20:43:20.169307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:12.179 [2024-11-25 20:43:20.169321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.179 [2024-11-25 20:43:20.169355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.169427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.179 [2024-11-25 20:43:20.169443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:12.179 [2024-11-25 20:43:20.169471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.179 [2024-11-25 20:43:20.169481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.179 [2024-11-25 20:43:20.169658] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 752.916 ms, result 0 00:25:12.179 true 00:25:12.179 20:43:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78162 00:25:12.179 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78162 ']' 00:25:12.179 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78162 00:25:12.179 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:12.179 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.179 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78162 00:25:12.179 killing process with pid 78162 00:25:12.179 Received shutdown signal, test time was about 4.000000 seconds 00:25:12.179 00:25:12.180 Latency(us) 00:25:12.180 [2024-11-25T20:43:20.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.180 [2024-11-25T20:43:20.316Z] =================================================================================================================== 00:25:12.180 [2024-11-25T20:43:20.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.180 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.180 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.180 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78162' 00:25:12.180 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78162 00:25:12.180 20:43:20 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78162 00:25:13.560 Remove shared memory files 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:13.560 20:43:21 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:25:13.560 ************************************ 00:25:13.560 END TEST ftl_bdevperf 00:25:13.560 ************************************ 00:25:13.560 00:25:13.560 real 0m23.278s 00:25:13.560 user 0m25.755s 00:25:13.560 sys 0m1.453s 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.560 20:43:21 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:13.817 20:43:21 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:13.817 20:43:21 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:13.817 20:43:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.817 20:43:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:13.817 ************************************ 00:25:13.817 START TEST ftl_trim 00:25:13.817 ************************************ 00:25:13.817 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:13.817 * Looking for test storage... 00:25:13.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:13.817 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.817 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.817 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.817 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.817 20:43:21 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:25:14.075 20:43:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:25:14.075 20:43:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.075 20:43:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:25:14.075 20:43:21 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.075 20:43:21 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.075 20:43:21 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.075 20:43:21 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:25:14.075 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.075 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:14.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.075 --rc genhtml_branch_coverage=1 00:25:14.075 --rc genhtml_function_coverage=1 00:25:14.075 --rc genhtml_legend=1 00:25:14.075 --rc geninfo_all_blocks=1 00:25:14.075 --rc geninfo_unexecuted_blocks=1 00:25:14.075 00:25:14.075 ' 00:25:14.075 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:14.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.075 --rc genhtml_branch_coverage=1 00:25:14.075 --rc genhtml_function_coverage=1 00:25:14.075 --rc genhtml_legend=1 00:25:14.075 --rc geninfo_all_blocks=1 00:25:14.075 --rc geninfo_unexecuted_blocks=1 00:25:14.075 00:25:14.075 ' 00:25:14.075 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:14.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.075 --rc genhtml_branch_coverage=1 00:25:14.075 --rc genhtml_function_coverage=1 00:25:14.075 --rc genhtml_legend=1 00:25:14.075 --rc geninfo_all_blocks=1 00:25:14.075 --rc geninfo_unexecuted_blocks=1 00:25:14.075 00:25:14.075 ' 00:25:14.075 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:14.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.075 --rc genhtml_branch_coverage=1 00:25:14.075 --rc genhtml_function_coverage=1 00:25:14.075 --rc genhtml_legend=1 00:25:14.075 --rc geninfo_all_blocks=1 00:25:14.075 --rc geninfo_unexecuted_blocks=1 00:25:14.075 00:25:14.075 ' 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
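The scripts/common.sh trace above is the lcov version gate: versions below 2 keep the legacy --rc lcov_* option spelling. The comparison it performs reduces to something like this (a condensed sketch using sort -V, not the verbatim cmp_versions helper):

    version_lt() {   # true if $1 sorts strictly before $2 as a version string
        [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo "lcov 1.15 < 2: use --rc lcov_branch_coverage=1"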
00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:14.075 20:43:21 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:14.075 20:43:21 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78516 00:25:14.076 20:43:21 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:14.076 20:43:21 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78516 00:25:14.076 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78516 ']' 00:25:14.076 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.076 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.076 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.076 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.076 20:43:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:14.076 [2024-11-25 20:43:22.111123] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:25:14.076 [2024-11-25 20:43:22.111489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78516 ] 00:25:14.335 [2024-11-25 20:43:22.298594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:14.335 [2024-11-25 20:43:22.431017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.335 [2024-11-25 20:43:22.431157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.335 [2024-11-25 20:43:22.431195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.714 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.714 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:25:15.714 20:43:23 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:15.714 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:15.714 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:15.714 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:15.714 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:15.714 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:16.047 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:16.047 { 00:25:16.047 "name": "nvme0n1", 00:25:16.047 "aliases": [ 
00:25:16.047 "666bde6c-f198-4ff1-b759-a172e9117b09" 00:25:16.047 ], 00:25:16.047 "product_name": "NVMe disk", 00:25:16.047 "block_size": 4096, 00:25:16.047 "num_blocks": 1310720, 00:25:16.047 "uuid": "666bde6c-f198-4ff1-b759-a172e9117b09", 00:25:16.047 "numa_id": -1, 00:25:16.047 "assigned_rate_limits": { 00:25:16.047 "rw_ios_per_sec": 0, 00:25:16.047 "rw_mbytes_per_sec": 0, 00:25:16.047 "r_mbytes_per_sec": 0, 00:25:16.047 "w_mbytes_per_sec": 0 00:25:16.047 }, 00:25:16.047 "claimed": true, 00:25:16.047 "claim_type": "read_many_write_one", 00:25:16.047 "zoned": false, 00:25:16.047 "supported_io_types": { 00:25:16.047 "read": true, 00:25:16.047 "write": true, 00:25:16.047 "unmap": true, 00:25:16.047 "flush": true, 00:25:16.047 "reset": true, 00:25:16.047 "nvme_admin": true, 00:25:16.047 "nvme_io": true, 00:25:16.047 "nvme_io_md": false, 00:25:16.047 "write_zeroes": true, 00:25:16.047 "zcopy": false, 00:25:16.047 "get_zone_info": false, 00:25:16.047 "zone_management": false, 00:25:16.047 "zone_append": false, 00:25:16.047 "compare": true, 00:25:16.047 "compare_and_write": false, 00:25:16.047 "abort": true, 00:25:16.047 "seek_hole": false, 00:25:16.047 "seek_data": false, 00:25:16.047 "copy": true, 00:25:16.047 "nvme_iov_md": false 00:25:16.047 }, 00:25:16.047 "driver_specific": { 00:25:16.047 "nvme": [ 00:25:16.047 { 00:25:16.047 "pci_address": "0000:00:11.0", 00:25:16.047 "trid": { 00:25:16.047 "trtype": "PCIe", 00:25:16.047 "traddr": "0000:00:11.0" 00:25:16.047 }, 00:25:16.047 "ctrlr_data": { 00:25:16.047 "cntlid": 0, 00:25:16.047 "vendor_id": "0x1b36", 00:25:16.047 "model_number": "QEMU NVMe Ctrl", 00:25:16.047 "serial_number": "12341", 00:25:16.047 "firmware_revision": "8.0.0", 00:25:16.047 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:16.047 "oacs": { 00:25:16.047 "security": 0, 00:25:16.047 "format": 1, 00:25:16.047 "firmware": 0, 00:25:16.047 "ns_manage": 1 00:25:16.047 }, 00:25:16.047 "multi_ctrlr": false, 00:25:16.047 "ana_reporting": false 00:25:16.047 }, 00:25:16.047 "vs": { 00:25:16.047 "nvme_version": "1.4" 00:25:16.047 }, 00:25:16.047 "ns_data": { 00:25:16.047 "id": 1, 00:25:16.047 "can_share": false 00:25:16.047 } 00:25:16.047 } 00:25:16.047 ], 00:25:16.047 "mp_policy": "active_passive" 00:25:16.047 } 00:25:16.047 } 00:25:16.047 ]' 00:25:16.047 20:43:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:16.047 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:16.047 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:16.047 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:16.047 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:16.047 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:25:16.047 20:43:24 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:25:16.047 20:43:24 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:16.047 20:43:24 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:25:16.047 20:43:24 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:16.047 20:43:24 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:16.335 20:43:24 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=fa1bc79d-e31c-4c87-8064-5b671f9a2309 00:25:16.335 20:43:24 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:25:16.335 20:43:24 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u fa1bc79d-e31c-4c87-8064-5b671f9a2309 00:25:16.594 20:43:24 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=922045c8-773e-4c53-a06c-f6742f30badb 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 922045c8-773e-4c53-a06c-f6742f30badb 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2d19a822-89ee-4773-a22a-13c66caed4da 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2d19a822-89ee-4773-a22a-13c66caed4da 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2d19a822-89ee-4773-a22a-13c66caed4da 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:25:16.853 20:43:24 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2d19a822-89ee-4773-a22a-13c66caed4da 00:25:16.853 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d19a822-89ee-4773-a22a-13c66caed4da 00:25:16.853 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:16.853 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:16.853 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:16.853 20:43:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d19a822-89ee-4773-a22a-13c66caed4da 00:25:17.112 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:17.112 { 00:25:17.112 "name": "2d19a822-89ee-4773-a22a-13c66caed4da", 00:25:17.112 "aliases": [ 00:25:17.112 "lvs/nvme0n1p0" 00:25:17.112 ], 00:25:17.112 "product_name": "Logical Volume", 00:25:17.112 "block_size": 4096, 00:25:17.112 "num_blocks": 26476544, 00:25:17.112 "uuid": "2d19a822-89ee-4773-a22a-13c66caed4da", 00:25:17.112 "assigned_rate_limits": { 00:25:17.112 "rw_ios_per_sec": 0, 00:25:17.112 "rw_mbytes_per_sec": 0, 00:25:17.112 "r_mbytes_per_sec": 0, 00:25:17.112 "w_mbytes_per_sec": 0 00:25:17.112 }, 00:25:17.113 "claimed": false, 00:25:17.113 "zoned": false, 00:25:17.113 "supported_io_types": { 00:25:17.113 "read": true, 00:25:17.113 "write": true, 00:25:17.113 "unmap": true, 00:25:17.113 "flush": false, 00:25:17.113 "reset": true, 00:25:17.113 "nvme_admin": false, 00:25:17.113 "nvme_io": false, 00:25:17.113 "nvme_io_md": false, 00:25:17.113 "write_zeroes": true, 00:25:17.113 "zcopy": false, 00:25:17.113 "get_zone_info": false, 00:25:17.113 "zone_management": false, 00:25:17.113 "zone_append": false, 00:25:17.113 "compare": false, 00:25:17.113 "compare_and_write": false, 00:25:17.113 "abort": false, 00:25:17.113 "seek_hole": true, 00:25:17.113 "seek_data": true, 00:25:17.113 "copy": false, 00:25:17.113 "nvme_iov_md": false 00:25:17.113 }, 00:25:17.113 "driver_specific": { 00:25:17.113 "lvol": { 00:25:17.113 "lvol_store_uuid": "922045c8-773e-4c53-a06c-f6742f30badb", 00:25:17.113 "base_bdev": "nvme0n1", 00:25:17.113 "thin_provision": true, 00:25:17.113 "num_allocated_clusters": 0, 00:25:17.113 "snapshot": false, 00:25:17.113 "clone": false, 00:25:17.113 "esnap_clone": false 00:25:17.113 } 00:25:17.113 } 00:25:17.113 } 00:25:17.113 ]' 00:25:17.113 20:43:25 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:17.113 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:17.113 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:17.372 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:17.372 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:17.372 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:17.372 20:43:25 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:17.372 20:43:25 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:17.372 20:43:25 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:17.631 20:43:25 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:17.631 20:43:25 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:17.631 20:43:25 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2d19a822-89ee-4773-a22a-13c66caed4da 00:25:17.631 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d19a822-89ee-4773-a22a-13c66caed4da 00:25:17.631 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:17.631 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:17.631 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:17.631 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d19a822-89ee-4773-a22a-13c66caed4da 00:25:17.890 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:17.890 { 00:25:17.890 "name": "2d19a822-89ee-4773-a22a-13c66caed4da", 00:25:17.890 "aliases": [ 00:25:17.890 "lvs/nvme0n1p0" 00:25:17.890 ], 00:25:17.890 "product_name": "Logical Volume", 00:25:17.890 "block_size": 4096, 00:25:17.890 "num_blocks": 26476544, 00:25:17.890 "uuid": "2d19a822-89ee-4773-a22a-13c66caed4da", 00:25:17.890 "assigned_rate_limits": { 00:25:17.890 "rw_ios_per_sec": 0, 00:25:17.890 "rw_mbytes_per_sec": 0, 00:25:17.890 "r_mbytes_per_sec": 0, 00:25:17.890 "w_mbytes_per_sec": 0 00:25:17.890 }, 00:25:17.890 "claimed": false, 00:25:17.890 "zoned": false, 00:25:17.890 "supported_io_types": { 00:25:17.890 "read": true, 00:25:17.890 "write": true, 00:25:17.890 "unmap": true, 00:25:17.890 "flush": false, 00:25:17.890 "reset": true, 00:25:17.890 "nvme_admin": false, 00:25:17.890 "nvme_io": false, 00:25:17.890 "nvme_io_md": false, 00:25:17.890 "write_zeroes": true, 00:25:17.890 "zcopy": false, 00:25:17.890 "get_zone_info": false, 00:25:17.890 "zone_management": false, 00:25:17.890 "zone_append": false, 00:25:17.890 "compare": false, 00:25:17.890 "compare_and_write": false, 00:25:17.890 "abort": false, 00:25:17.890 "seek_hole": true, 00:25:17.890 "seek_data": true, 00:25:17.890 "copy": false, 00:25:17.890 "nvme_iov_md": false 00:25:17.890 }, 00:25:17.890 "driver_specific": { 00:25:17.890 "lvol": { 00:25:17.890 "lvol_store_uuid": "922045c8-773e-4c53-a06c-f6742f30badb", 00:25:17.890 "base_bdev": "nvme0n1", 00:25:17.890 "thin_provision": true, 00:25:17.890 "num_allocated_clusters": 0, 00:25:17.890 "snapshot": false, 00:25:17.890 "clone": false, 00:25:17.890 "esnap_clone": false 00:25:17.890 } 00:25:17.890 } 00:25:17.890 } 00:25:17.890 ]' 00:25:17.890 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:17.890 20:43:25 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:25:17.890 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:17.890 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:17.890 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:17.890 20:43:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:17.890 20:43:25 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:17.890 20:43:25 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:18.149 20:43:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:18.149 20:43:26 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:18.149 20:43:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2d19a822-89ee-4773-a22a-13c66caed4da 00:25:18.149 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2d19a822-89ee-4773-a22a-13c66caed4da 00:25:18.149 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:18.149 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:18.149 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:18.149 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d19a822-89ee-4773-a22a-13c66caed4da 00:25:18.408 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:18.408 { 00:25:18.408 "name": "2d19a822-89ee-4773-a22a-13c66caed4da", 00:25:18.408 "aliases": [ 00:25:18.408 "lvs/nvme0n1p0" 00:25:18.408 ], 00:25:18.408 "product_name": "Logical Volume", 00:25:18.408 "block_size": 4096, 00:25:18.408 "num_blocks": 26476544, 00:25:18.408 "uuid": "2d19a822-89ee-4773-a22a-13c66caed4da", 00:25:18.408 "assigned_rate_limits": { 00:25:18.408 "rw_ios_per_sec": 0, 00:25:18.408 "rw_mbytes_per_sec": 0, 00:25:18.408 "r_mbytes_per_sec": 0, 00:25:18.408 "w_mbytes_per_sec": 0 00:25:18.408 }, 00:25:18.408 "claimed": false, 00:25:18.408 "zoned": false, 00:25:18.408 "supported_io_types": { 00:25:18.408 "read": true, 00:25:18.408 "write": true, 00:25:18.408 "unmap": true, 00:25:18.408 "flush": false, 00:25:18.408 "reset": true, 00:25:18.408 "nvme_admin": false, 00:25:18.408 "nvme_io": false, 00:25:18.408 "nvme_io_md": false, 00:25:18.408 "write_zeroes": true, 00:25:18.408 "zcopy": false, 00:25:18.408 "get_zone_info": false, 00:25:18.408 "zone_management": false, 00:25:18.408 "zone_append": false, 00:25:18.408 "compare": false, 00:25:18.408 "compare_and_write": false, 00:25:18.408 "abort": false, 00:25:18.408 "seek_hole": true, 00:25:18.408 "seek_data": true, 00:25:18.408 "copy": false, 00:25:18.408 "nvme_iov_md": false 00:25:18.408 }, 00:25:18.408 "driver_specific": { 00:25:18.408 "lvol": { 00:25:18.408 "lvol_store_uuid": "922045c8-773e-4c53-a06c-f6742f30badb", 00:25:18.408 "base_bdev": "nvme0n1", 00:25:18.408 "thin_provision": true, 00:25:18.408 "num_allocated_clusters": 0, 00:25:18.408 "snapshot": false, 00:25:18.408 "clone": false, 00:25:18.408 "esnap_clone": false 00:25:18.408 } 00:25:18.408 } 00:25:18.408 } 00:25:18.408 ]' 00:25:18.408 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:18.408 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:18.408 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:18.408 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:25:18.408 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:18.408 20:43:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:18.408 20:43:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:18.409 20:43:26 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2d19a822-89ee-4773-a22a-13c66caed4da -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:18.669 [2024-11-25 20:43:26.578374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.669 [2024-11-25 20:43:26.578436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:18.669 [2024-11-25 20:43:26.578457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:18.669 [2024-11-25 20:43:26.578469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.669 [2024-11-25 20:43:26.582075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.669 [2024-11-25 20:43:26.582116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:18.669 [2024-11-25 20:43:26.582133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.573 ms 00:25:18.669 [2024-11-25 20:43:26.582143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.669 [2024-11-25 20:43:26.582280] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:18.669 [2024-11-25 20:43:26.583293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:18.669 [2024-11-25 20:43:26.583319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.669 [2024-11-25 20:43:26.583343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:18.669 [2024-11-25 20:43:26.583358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:25:18.669 [2024-11-25 20:43:26.583369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.669 [2024-11-25 20:43:26.583513] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6 00:25:18.669 [2024-11-25 20:43:26.585979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.669 [2024-11-25 20:43:26.586019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:18.669 [2024-11-25 20:43:26.586033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:18.669 [2024-11-25 20:43:26.586047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.669 [2024-11-25 20:43:26.600861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.669 [2024-11-25 20:43:26.600912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:18.669 [2024-11-25 20:43:26.600928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.741 ms 00:25:18.669 [2024-11-25 20:43:26.600943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.669 [2024-11-25 20:43:26.601156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.669 [2024-11-25 20:43:26.601178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:18.669 [2024-11-25 20:43:26.601191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.106 ms 00:25:18.669 [2024-11-25 20:43:26.601214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.669 [2024-11-25 20:43:26.601264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.669 [2024-11-25 20:43:26.601282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:18.669 [2024-11-25 20:43:26.601300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:18.669 [2024-11-25 20:43:26.601317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.669 [2024-11-25 20:43:26.601384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:18.669 [2024-11-25 20:43:26.607339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.669 [2024-11-25 20:43:26.607374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:18.669 [2024-11-25 20:43:26.607391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.961 ms 00:25:18.670 [2024-11-25 20:43:26.607402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.670 [2024-11-25 20:43:26.607489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.670 [2024-11-25 20:43:26.607521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:18.670 [2024-11-25 20:43:26.607538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:18.670 [2024-11-25 20:43:26.607548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.670 [2024-11-25 20:43:26.607600] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:18.670 [2024-11-25 20:43:26.607741] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:18.670 [2024-11-25 20:43:26.607763] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:18.670 [2024-11-25 20:43:26.607778] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:18.670 [2024-11-25 20:43:26.607795] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:18.670 [2024-11-25 20:43:26.607807] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:18.670 [2024-11-25 20:43:26.607821] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:18.670 [2024-11-25 20:43:26.607835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:18.670 [2024-11-25 20:43:26.607848] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:18.670 [2024-11-25 20:43:26.607858] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:18.670 [2024-11-25 20:43:26.607873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.670 [2024-11-25 20:43:26.607883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:18.670 [2024-11-25 20:43:26.607899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:25:18.670 [2024-11-25 20:43:26.607909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.670 [2024-11-25 20:43:26.608009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
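(A quick sanity check on the layout figures just printed, before the region-by-region dump that follows. Shell arithmetic only; the relationships are inferred from the printed values, not taken from the FTL sources:

# Values as reported by ftl_layout_setup above:
l2p_entries=23592960    # one L2P entry per exposed 4 KiB block
l2p_addr_size=4         # bytes per entry, as printed
block_size=4096         # bytes, per the bdev dumps earlier in this log

# L2P table footprint: 23592960 entries * 4 B = 94371840 B = 90 MiB,
# matching the "Region l2p ... blocks: 90.00 MiB" line in the dump below.
echo "$(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"

# Exposed user capacity: 23592960 blocks * 4 KiB = 90 GiB; the same
# 23592960 reappears as num_blocks of ftl0 once the bdev is registered.
echo "$(( l2p_entries * block_size / 1024**3 )) GiB"

Of that 90 MiB table, only 60 MiB may stay resident in DRAM (--l2p_dram_limit 60 on the bdev_ftl_create invocation above), which is why the startup later reports "l2p maximum resident size is: 59 (of 60) MiB".)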
00:25:18.670 [2024-11-25 20:43:26.608020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:18.670 [2024-11-25 20:43:26.608034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:18.670 [2024-11-25 20:43:26.608045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.670 [2024-11-25 20:43:26.608181] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:18.670 [2024-11-25 20:43:26.608193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:18.670 [2024-11-25 20:43:26.608208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:18.670 [2024-11-25 20:43:26.608242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:18.670 [2024-11-25 20:43:26.608277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:18.670 [2024-11-25 20:43:26.608298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:18.670 [2024-11-25 20:43:26.608308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:18.670 [2024-11-25 20:43:26.608320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:18.670 [2024-11-25 20:43:26.608350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:18.670 [2024-11-25 20:43:26.608363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:18.670 [2024-11-25 20:43:26.608374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:18.670 [2024-11-25 20:43:26.608399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:18.670 [2024-11-25 20:43:26.608436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:18.670 [2024-11-25 20:43:26.608467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:18.670 [2024-11-25 20:43:26.608501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608522] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:25:18.670 [2024-11-25 20:43:26.608531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:18.670 [2024-11-25 20:43:26.608568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:18.670 [2024-11-25 20:43:26.608589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:18.670 [2024-11-25 20:43:26.608598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:18.670 [2024-11-25 20:43:26.608611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:18.670 [2024-11-25 20:43:26.608620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:18.670 [2024-11-25 20:43:26.608633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:18.670 [2024-11-25 20:43:26.608643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:18.670 [2024-11-25 20:43:26.608665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:18.670 [2024-11-25 20:43:26.608677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608686] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:18.670 [2024-11-25 20:43:26.608700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:18.670 [2024-11-25 20:43:26.608710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.670 [2024-11-25 20:43:26.608736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:18.670 [2024-11-25 20:43:26.608752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:18.670 [2024-11-25 20:43:26.608762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:18.670 [2024-11-25 20:43:26.608775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:18.670 [2024-11-25 20:43:26.608783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:18.670 [2024-11-25 20:43:26.608796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:18.670 [2024-11-25 20:43:26.608810] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:18.670 [2024-11-25 20:43:26.608833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:18.670 [2024-11-25 20:43:26.608845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:18.670 [2024-11-25 20:43:26.608859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:18.670 [2024-11-25 20:43:26.608870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:25:18.670 [2024-11-25 20:43:26.608884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:18.670 [2024-11-25 20:43:26.608894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:18.670 [2024-11-25 20:43:26.608908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:18.670 [2024-11-25 20:43:26.608918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:18.670 [2024-11-25 20:43:26.608932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:18.670 [2024-11-25 20:43:26.608943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:18.670 [2024-11-25 20:43:26.608959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:18.670 [2024-11-25 20:43:26.608969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:18.670 [2024-11-25 20:43:26.608984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:18.670 [2024-11-25 20:43:26.608994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:18.670 [2024-11-25 20:43:26.609007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:18.670 [2024-11-25 20:43:26.609017] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:18.670 [2024-11-25 20:43:26.609032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:18.670 [2024-11-25 20:43:26.609044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:18.670 [2024-11-25 20:43:26.609058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:18.670 [2024-11-25 20:43:26.609068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:18.670 [2024-11-25 20:43:26.609082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:18.670 [2024-11-25 20:43:26.609093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.671 [2024-11-25 20:43:26.609107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:18.671 [2024-11-25 20:43:26.609118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:25:18.671 [2024-11-25 20:43:26.609131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.671 [2024-11-25 20:43:26.609241] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:25:18.671 [2024-11-25 20:43:26.609281] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:21.960 [2024-11-25 20:43:29.866299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.960 [2024-11-25 20:43:29.866402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:21.960 [2024-11-25 20:43:29.866424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3262.340 ms 00:25:21.960 [2024-11-25 20:43:29.866439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.960 [2024-11-25 20:43:29.913334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.960 [2024-11-25 20:43:29.913421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:21.960 [2024-11-25 20:43:29.913440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.519 ms 00:25:21.960 [2024-11-25 20:43:29.913455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.960 [2024-11-25 20:43:29.913705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.960 [2024-11-25 20:43:29.913724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:21.960 [2024-11-25 20:43:29.913761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:21.960 [2024-11-25 20:43:29.913785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.960 [2024-11-25 20:43:29.976239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.960 [2024-11-25 20:43:29.976354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.961 [2024-11-25 20:43:29.976373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.506 ms 00:25:21.961 [2024-11-25 20:43:29.976391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.961 [2024-11-25 20:43:29.976553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.961 [2024-11-25 20:43:29.976571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.961 [2024-11-25 20:43:29.976583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:21.961 [2024-11-25 20:43:29.976597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.961 [2024-11-25 20:43:29.977376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.961 [2024-11-25 20:43:29.977400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.961 [2024-11-25 20:43:29.977412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:25:21.961 [2024-11-25 20:43:29.977426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.961 [2024-11-25 20:43:29.977585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.961 [2024-11-25 20:43:29.977600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.961 [2024-11-25 20:43:29.977630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:25:21.961 [2024-11-25 20:43:29.977649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.961 [2024-11-25 20:43:30.004444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.961 [2024-11-25 20:43:30.004523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:25:21.961 [2024-11-25 20:43:30.004541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.795 ms 00:25:21.961 [2024-11-25 20:43:30.004556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.961 [2024-11-25 20:43:30.018973] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:21.961 [2024-11-25 20:43:30.045786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.961 [2024-11-25 20:43:30.045865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:21.961 [2024-11-25 20:43:30.045887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.082 ms 00:25:21.961 [2024-11-25 20:43:30.045899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.221 [2024-11-25 20:43:30.141607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.221 [2024-11-25 20:43:30.141692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:22.221 [2024-11-25 20:43:30.141716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.670 ms 00:25:22.221 [2024-11-25 20:43:30.141727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.221 [2024-11-25 20:43:30.142028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.221 [2024-11-25 20:43:30.142044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:22.221 [2024-11-25 20:43:30.142064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:25:22.221 [2024-11-25 20:43:30.142075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.221 [2024-11-25 20:43:30.179954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.221 [2024-11-25 20:43:30.180192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:22.221 [2024-11-25 20:43:30.180232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.885 ms 00:25:22.221 [2024-11-25 20:43:30.180244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.221 [2024-11-25 20:43:30.219208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.221 [2024-11-25 20:43:30.219442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:22.221 [2024-11-25 20:43:30.219479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.885 ms 00:25:22.221 [2024-11-25 20:43:30.219491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.221 [2024-11-25 20:43:30.220451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.221 [2024-11-25 20:43:30.220485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:22.221 [2024-11-25 20:43:30.220503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:25:22.221 [2024-11-25 20:43:30.220515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.221 [2024-11-25 20:43:30.325536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.221 [2024-11-25 20:43:30.325617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:22.221 [2024-11-25 20:43:30.325646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.128 ms 00:25:22.221 [2024-11-25 20:43:30.325659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:22.480 [2024-11-25 20:43:30.368805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.480 [2024-11-25 20:43:30.368886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:22.480 [2024-11-25 20:43:30.368910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.057 ms 00:25:22.480 [2024-11-25 20:43:30.368927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.480 [2024-11-25 20:43:30.409829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.480 [2024-11-25 20:43:30.409902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:22.480 [2024-11-25 20:43:30.409924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.845 ms 00:25:22.480 [2024-11-25 20:43:30.409935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.480 [2024-11-25 20:43:30.447785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.480 [2024-11-25 20:43:30.447869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:22.480 [2024-11-25 20:43:30.447891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.798 ms 00:25:22.480 [2024-11-25 20:43:30.447902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.480 [2024-11-25 20:43:30.448018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.480 [2024-11-25 20:43:30.448032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:22.481 [2024-11-25 20:43:30.448053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:22.481 [2024-11-25 20:43:30.448064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.481 [2024-11-25 20:43:30.448177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.481 [2024-11-25 20:43:30.448190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:22.481 [2024-11-25 20:43:30.448204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:22.481 [2024-11-25 20:43:30.448214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.481 [2024-11-25 20:43:30.449647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:22.481 [2024-11-25 20:43:30.454291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3877.185 ms, result 0 00:25:22.481 [2024-11-25 20:43:30.455394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:22.481 { 00:25:22.481 "name": "ftl0", 00:25:22.481 "uuid": "1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6" 00:25:22.481 } 00:25:22.481 20:43:30 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:22.481 20:43:30 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:22.481 20:43:30 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:22.481 20:43:30 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:25:22.481 20:43:30 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:22.481 20:43:30 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:22.481 20:43:30 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:22.740 20:43:30 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:22.999 [ 00:25:22.999 { 00:25:22.999 "name": "ftl0", 00:25:22.999 "aliases": [ 00:25:22.999 "1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6" 00:25:22.999 ], 00:25:22.999 "product_name": "FTL disk", 00:25:22.999 "block_size": 4096, 00:25:22.999 "num_blocks": 23592960, 00:25:22.999 "uuid": "1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6", 00:25:22.999 "assigned_rate_limits": { 00:25:22.999 "rw_ios_per_sec": 0, 00:25:22.999 "rw_mbytes_per_sec": 0, 00:25:22.999 "r_mbytes_per_sec": 0, 00:25:22.999 "w_mbytes_per_sec": 0 00:25:22.999 }, 00:25:22.999 "claimed": false, 00:25:22.999 "zoned": false, 00:25:22.999 "supported_io_types": { 00:25:22.999 "read": true, 00:25:22.999 "write": true, 00:25:22.999 "unmap": true, 00:25:22.999 "flush": true, 00:25:22.999 "reset": false, 00:25:22.999 "nvme_admin": false, 00:25:22.999 "nvme_io": false, 00:25:22.999 "nvme_io_md": false, 00:25:22.999 "write_zeroes": true, 00:25:22.999 "zcopy": false, 00:25:22.999 "get_zone_info": false, 00:25:22.999 "zone_management": false, 00:25:22.999 "zone_append": false, 00:25:22.999 "compare": false, 00:25:22.999 "compare_and_write": false, 00:25:22.999 "abort": false, 00:25:22.999 "seek_hole": false, 00:25:22.999 "seek_data": false, 00:25:22.999 "copy": false, 00:25:22.999 "nvme_iov_md": false 00:25:22.999 }, 00:25:22.999 "driver_specific": { 00:25:22.999 "ftl": { 00:25:22.999 "base_bdev": "2d19a822-89ee-4773-a22a-13c66caed4da", 00:25:22.999 "cache": "nvc0n1p0" 00:25:22.999 } 00:25:22.999 } 00:25:22.999 } 00:25:22.999 ] 00:25:22.999 20:43:30 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:25:22.999 20:43:30 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:22.999 20:43:30 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:23.258 20:43:31 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:23.258 20:43:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:23.258 20:43:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:23.258 { 00:25:23.258 "name": "ftl0", 00:25:23.258 "aliases": [ 00:25:23.258 "1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6" 00:25:23.258 ], 00:25:23.258 "product_name": "FTL disk", 00:25:23.258 "block_size": 4096, 00:25:23.258 "num_blocks": 23592960, 00:25:23.258 "uuid": "1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6", 00:25:23.258 "assigned_rate_limits": { 00:25:23.258 "rw_ios_per_sec": 0, 00:25:23.258 "rw_mbytes_per_sec": 0, 00:25:23.258 "r_mbytes_per_sec": 0, 00:25:23.258 "w_mbytes_per_sec": 0 00:25:23.258 }, 00:25:23.258 "claimed": false, 00:25:23.258 "zoned": false, 00:25:23.258 "supported_io_types": { 00:25:23.258 "read": true, 00:25:23.258 "write": true, 00:25:23.258 "unmap": true, 00:25:23.258 "flush": true, 00:25:23.258 "reset": false, 00:25:23.258 "nvme_admin": false, 00:25:23.258 "nvme_io": false, 00:25:23.258 "nvme_io_md": false, 00:25:23.258 "write_zeroes": true, 00:25:23.258 "zcopy": false, 00:25:23.258 "get_zone_info": false, 00:25:23.258 "zone_management": false, 00:25:23.258 "zone_append": false, 00:25:23.258 "compare": false, 00:25:23.258 "compare_and_write": false, 00:25:23.258 "abort": false, 00:25:23.258 "seek_hole": false, 00:25:23.258 "seek_data": false, 00:25:23.258 "copy": false, 00:25:23.258 "nvme_iov_md": false 00:25:23.258 }, 00:25:23.258 "driver_specific": { 00:25:23.258 "ftl": { 00:25:23.258 "base_bdev": 
"2d19a822-89ee-4773-a22a-13c66caed4da", 00:25:23.258 "cache": "nvc0n1p0" 00:25:23.258 } 00:25:23.258 } 00:25:23.258 } 00:25:23.258 ]' 00:25:23.258 20:43:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:23.517 20:43:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:23.517 20:43:31 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:23.517 [2024-11-25 20:43:31.612162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.517 [2024-11-25 20:43:31.612243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:23.517 [2024-11-25 20:43:31.612268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:23.517 [2024-11-25 20:43:31.612283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.517 [2024-11-25 20:43:31.612347] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:23.517 [2024-11-25 20:43:31.617174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.517 [2024-11-25 20:43:31.617209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:23.517 [2024-11-25 20:43:31.617235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.806 ms 00:25:23.517 [2024-11-25 20:43:31.617246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.517 [2024-11-25 20:43:31.617941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.517 [2024-11-25 20:43:31.617969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:23.517 [2024-11-25 20:43:31.617984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:25:23.517 [2024-11-25 20:43:31.617995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.517 [2024-11-25 20:43:31.620835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.517 [2024-11-25 20:43:31.620858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:23.517 [2024-11-25 20:43:31.620873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:25:23.517 [2024-11-25 20:43:31.620884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.517 [2024-11-25 20:43:31.626569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.517 [2024-11-25 20:43:31.626738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:23.517 [2024-11-25 20:43:31.626769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.629 ms 00:25:23.517 [2024-11-25 20:43:31.626780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.778 [2024-11-25 20:43:31.666413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.778 [2024-11-25 20:43:31.666491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:23.778 [2024-11-25 20:43:31.666518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.576 ms 00:25:23.778 [2024-11-25 20:43:31.666530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.778 [2024-11-25 20:43:31.690033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.778 [2024-11-25 20:43:31.690275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:23.778 [2024-11-25 20:43:31.690315] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.382 ms 00:25:23.778 [2024-11-25 20:43:31.690339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.778 [2024-11-25 20:43:31.690611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.778 [2024-11-25 20:43:31.690627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:23.778 [2024-11-25 20:43:31.690642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:25:23.778 [2024-11-25 20:43:31.690654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.778 [2024-11-25 20:43:31.728400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.778 [2024-11-25 20:43:31.728587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:23.778 [2024-11-25 20:43:31.728618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.762 ms 00:25:23.778 [2024-11-25 20:43:31.728629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.778 [2024-11-25 20:43:31.766321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.778 [2024-11-25 20:43:31.766380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:23.778 [2024-11-25 20:43:31.766404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.651 ms 00:25:23.778 [2024-11-25 20:43:31.766415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.778 [2024-11-25 20:43:31.804080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.778 [2024-11-25 20:43:31.804292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:23.778 [2024-11-25 20:43:31.804322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.595 ms 00:25:23.778 [2024-11-25 20:43:31.804345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.778 [2024-11-25 20:43:31.841401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.778 [2024-11-25 20:43:31.841453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:23.778 [2024-11-25 20:43:31.841472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.940 ms 00:25:23.778 [2024-11-25 20:43:31.841483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.778 [2024-11-25 20:43:31.841590] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:23.778 [2024-11-25 20:43:31.841610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 
[2024-11-25 20:43:31.841711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.841994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.842006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.842021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.842036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.842053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:25:23.778 [2024-11-25 20:43:31.842064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.842078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.842089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:23.778 [2024-11-25 20:43:31.842103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:23.779 [2024-11-25 20:43:31.842998] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:23.779 [2024-11-25 20:43:31.843015] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6 00:25:23.779 [2024-11-25 20:43:31.843026] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:23.779 [2024-11-25 20:43:31.843040] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:23.779 [2024-11-25 20:43:31.843055] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:23.779 [2024-11-25 20:43:31.843069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:23.779 [2024-11-25 20:43:31.843079] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:23.779 [2024-11-25 20:43:31.843093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:25:23.779 [2024-11-25 20:43:31.843104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:23.779 [2024-11-25 20:43:31.843116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:23.779 [2024-11-25 20:43:31.843126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:23.779 [2024-11-25 20:43:31.843139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.779 [2024-11-25 20:43:31.843150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:23.779 [2024-11-25 20:43:31.843165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:25:23.779 [2024-11-25 20:43:31.843175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.779 [2024-11-25 20:43:31.865298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.779 [2024-11-25 20:43:31.865358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:23.779 [2024-11-25 20:43:31.865379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.112 ms 00:25:23.779 [2024-11-25 20:43:31.865391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.779 [2024-11-25 20:43:31.866106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.779 [2024-11-25 20:43:31.866124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:23.779 [2024-11-25 20:43:31.866138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:25:23.779 [2024-11-25 20:43:31.866149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.039 [2024-11-25 20:43:31.942520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.039 [2024-11-25 20:43:31.942604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:24.039 [2024-11-25 20:43:31.942625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.039 [2024-11-25 20:43:31.942639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.039 [2024-11-25 20:43:31.942864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.039 [2024-11-25 20:43:31.942878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:24.039 [2024-11-25 20:43:31.942893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.039 [2024-11-25 20:43:31.942904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.039 [2024-11-25 20:43:31.943004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.039 [2024-11-25 20:43:31.943018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:24.039 [2024-11-25 20:43:31.943036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.039 [2024-11-25 20:43:31.943047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.039 [2024-11-25 20:43:31.943092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.039 [2024-11-25 20:43:31.943103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:24.039 [2024-11-25 20:43:31.943117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.039 [2024-11-25 20:43:31.943128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.039 [2024-11-25 
20:43:32.091762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.039 [2024-11-25 20:43:32.091849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:24.039 [2024-11-25 20:43:32.091871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.039 [2024-11-25 20:43:32.091884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.299 [2024-11-25 20:43:32.205134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.299 [2024-11-25 20:43:32.205461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:24.299 [2024-11-25 20:43:32.205495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.299 [2024-11-25 20:43:32.205507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.299 [2024-11-25 20:43:32.205710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.299 [2024-11-25 20:43:32.205724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:24.299 [2024-11-25 20:43:32.205748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.299 [2024-11-25 20:43:32.205760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.299 [2024-11-25 20:43:32.205831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.299 [2024-11-25 20:43:32.205842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:24.299 [2024-11-25 20:43:32.205857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.299 [2024-11-25 20:43:32.205868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.299 [2024-11-25 20:43:32.206026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.299 [2024-11-25 20:43:32.206042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:24.299 [2024-11-25 20:43:32.206060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.299 [2024-11-25 20:43:32.206071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.299 [2024-11-25 20:43:32.206148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.299 [2024-11-25 20:43:32.206162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:24.299 [2024-11-25 20:43:32.206176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.299 [2024-11-25 20:43:32.206186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.299 [2024-11-25 20:43:32.206259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.299 [2024-11-25 20:43:32.206271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:24.299 [2024-11-25 20:43:32.206289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.299 [2024-11-25 20:43:32.206302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.299 [2024-11-25 20:43:32.206393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.299 [2024-11-25 20:43:32.206407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:24.299 [2024-11-25 20:43:32.206421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.299 [2024-11-25 20:43:32.206432] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.299 [2024-11-25 20:43:32.206673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 595.466 ms, result 0 00:25:24.299 true 00:25:24.299 20:43:32 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78516 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78516 ']' 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78516 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78516 00:25:24.299 killing process with pid 78516 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78516' 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78516 00:25:24.299 20:43:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78516 00:25:29.567 20:43:37 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:25:30.504 65536+0 records in 00:25:30.504 65536+0 records out 00:25:30.504 268435456 bytes (268 MB, 256 MiB) copied, 1.05642 s, 254 MB/s 00:25:30.504 20:43:38 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.504 [2024-11-25 20:43:38.619660] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
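A quick cross-check of the dd numbers above: 65536 blocks of 4 KiB is exactly 268435456 bytes (256 MiB), and that volume over the reported 1.05642 s comes out at roughly 254 MB/s, matching dd's summary line (dd reports decimal MB). A minimal shell sketch of the same arithmetic, standalone and not part of trim.sh:

    # dd wrote count=65536 blocks of bs=4K from /dev/urandom
    bytes=$(( 65536 * 4096 ))                      # 268435456 B = 256 MiB
    awk -v b="$bytes" -v t=1.05642 \
        'BEGIN { printf "%d bytes, %.0f MB/s\n", b, b / t / 1e6 }'

This random pattern file is what the spdk_dd invocation on the next lines then pushes through the ftl0 bdev.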
00:25:30.504 [2024-11-25 20:43:38.619815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78732 ] 00:25:30.762 [2024-11-25 20:43:38.804874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.022 [2024-11-25 20:43:38.951270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.280 [2024-11-25 20:43:39.359379] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.280 [2024-11-25 20:43:39.359706] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.540 [2024-11-25 20:43:39.527191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.527521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:31.540 [2024-11-25 20:43:39.527552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:31.540 [2024-11-25 20:43:39.527564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.531072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.531237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.540 [2024-11-25 20:43:39.531259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:25:31.540 [2024-11-25 20:43:39.531271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.531394] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:31.540 [2024-11-25 20:43:39.532456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:31.540 [2024-11-25 20:43:39.532492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.532504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.540 [2024-11-25 20:43:39.532516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:25:31.540 [2024-11-25 20:43:39.532526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.535075] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:31.540 [2024-11-25 20:43:39.555801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.555853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:31.540 [2024-11-25 20:43:39.555870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.759 ms 00:25:31.540 [2024-11-25 20:43:39.555882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.556018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.556033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:31.540 [2024-11-25 20:43:39.556046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:31.540 [2024-11-25 20:43:39.556057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.569357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:31.540 [2024-11-25 20:43:39.569396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.540 [2024-11-25 20:43:39.569411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.268 ms 00:25:31.540 [2024-11-25 20:43:39.569438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.569604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.569621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.540 [2024-11-25 20:43:39.569634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:31.540 [2024-11-25 20:43:39.569645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.569685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.569697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:31.540 [2024-11-25 20:43:39.569709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:31.540 [2024-11-25 20:43:39.569719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.569749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:31.540 [2024-11-25 20:43:39.575626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.575800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.540 [2024-11-25 20:43:39.575823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.897 ms 00:25:31.540 [2024-11-25 20:43:39.575835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.575904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.575917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:31.540 [2024-11-25 20:43:39.575929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:31.540 [2024-11-25 20:43:39.575945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.575968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:31.540 [2024-11-25 20:43:39.575994] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:31.540 [2024-11-25 20:43:39.576033] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:31.540 [2024-11-25 20:43:39.576052] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:31.540 [2024-11-25 20:43:39.576147] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:31.540 [2024-11-25 20:43:39.576162] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:31.540 [2024-11-25 20:43:39.576180] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:31.540 [2024-11-25 20:43:39.576193] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:31.540 [2024-11-25 20:43:39.576206] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:31.540 [2024-11-25 20:43:39.576219] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:31.540 [2024-11-25 20:43:39.576230] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:31.540 [2024-11-25 20:43:39.576241] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:31.540 [2024-11-25 20:43:39.576252] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:31.540 [2024-11-25 20:43:39.576263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.576273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:31.540 [2024-11-25 20:43:39.576284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:25:31.540 [2024-11-25 20:43:39.576295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.576396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.540 [2024-11-25 20:43:39.576409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:31.540 [2024-11-25 20:43:39.576420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:31.540 [2024-11-25 20:43:39.576431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.540 [2024-11-25 20:43:39.576530] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:31.540 [2024-11-25 20:43:39.576543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:31.540 [2024-11-25 20:43:39.576555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.540 [2024-11-25 20:43:39.576565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.540 [2024-11-25 20:43:39.576577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:31.540 [2024-11-25 20:43:39.576586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:31.540 [2024-11-25 20:43:39.576596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:31.540 [2024-11-25 20:43:39.576605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:31.540 [2024-11-25 20:43:39.576617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.541 [2024-11-25 20:43:39.576639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:31.541 [2024-11-25 20:43:39.576663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:31.541 [2024-11-25 20:43:39.576673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.541 [2024-11-25 20:43:39.576683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:31.541 [2024-11-25 20:43:39.576694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:31.541 [2024-11-25 20:43:39.576703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:31.541 [2024-11-25 20:43:39.576723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:31.541 [2024-11-25 20:43:39.576733] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:31.541 [2024-11-25 20:43:39.576753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.541 [2024-11-25 20:43:39.576771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:31.541 [2024-11-25 20:43:39.576780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.541 [2024-11-25 20:43:39.576799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:31.541 [2024-11-25 20:43:39.576809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.541 [2024-11-25 20:43:39.576827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:31.541 [2024-11-25 20:43:39.576836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.541 [2024-11-25 20:43:39.576854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:31.541 [2024-11-25 20:43:39.576863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.541 [2024-11-25 20:43:39.576881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:31.541 [2024-11-25 20:43:39.576890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:31.541 [2024-11-25 20:43:39.576899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.541 [2024-11-25 20:43:39.576908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:31.541 [2024-11-25 20:43:39.576917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:31.541 [2024-11-25 20:43:39.576926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:31.541 [2024-11-25 20:43:39.576945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:31.541 [2024-11-25 20:43:39.576954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.541 [2024-11-25 20:43:39.576964] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:31.541 [2024-11-25 20:43:39.576977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:31.541 [2024-11-25 20:43:39.576987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.541 [2024-11-25 20:43:39.576998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.541 [2024-11-25 20:43:39.577008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:31.541 [2024-11-25 20:43:39.577018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:31.541 [2024-11-25 20:43:39.577028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:31.541 
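The region dump above gives offsets and sizes in MiB, while the SB metadata layout lines that follow repeat the same regions as hex block offsets and sizes. The MiB figures imply a 4 KiB FTL block, so the two views can be tied together with a small helper (the hex values below are taken from this dump; the 4 KiB block size is inferred from the numbers, not printed by the log):

    blk_to_mib() { echo "$(( $1 * 4096 / 1048576 )) MiB"; }
    blk_to_mib 0x5a00   # l2p, type 0x2 -> 90 MiB ("blocks: 90.00 MiB" above)
    blk_to_mib 0x800    # p2l, type 0xa -> 8 MiB  ("blocks: 8.00 MiB" above)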
[2024-11-25 20:43:39.577037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:31.541 [2024-11-25 20:43:39.577046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:31.541 [2024-11-25 20:43:39.577056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:31.541 [2024-11-25 20:43:39.577067] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:31.541 [2024-11-25 20:43:39.577080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.541 [2024-11-25 20:43:39.577092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:31.541 [2024-11-25 20:43:39.577104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:31.541 [2024-11-25 20:43:39.577115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:31.541 [2024-11-25 20:43:39.577125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:31.541 [2024-11-25 20:43:39.577136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:31.541 [2024-11-25 20:43:39.577146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:31.541 [2024-11-25 20:43:39.577156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:31.541 [2024-11-25 20:43:39.577167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:31.541 [2024-11-25 20:43:39.577177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:31.541 [2024-11-25 20:43:39.577188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:31.541 [2024-11-25 20:43:39.577199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:31.541 [2024-11-25 20:43:39.577209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:31.541 [2024-11-25 20:43:39.577219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:31.541 [2024-11-25 20:43:39.577230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:31.541 [2024-11-25 20:43:39.577240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:31.541 [2024-11-25 20:43:39.577252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.541 [2024-11-25 20:43:39.577270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:31.541 [2024-11-25 20:43:39.577281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:31.541 [2024-11-25 20:43:39.577291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:31.541 [2024-11-25 20:43:39.577303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:31.541 [2024-11-25 20:43:39.577314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.541 [2024-11-25 20:43:39.577337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:31.541 [2024-11-25 20:43:39.577348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:25:31.541 [2024-11-25 20:43:39.577359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.541 [2024-11-25 20:43:39.628401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.541 [2024-11-25 20:43:39.628654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.541 [2024-11-25 20:43:39.628684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.054 ms 00:25:31.541 [2024-11-25 20:43:39.628703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.541 [2024-11-25 20:43:39.628928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.541 [2024-11-25 20:43:39.628942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:31.541 [2024-11-25 20:43:39.628954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:31.541 [2024-11-25 20:43:39.628966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.701347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.701418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.801 [2024-11-25 20:43:39.701437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.468 ms 00:25:31.801 [2024-11-25 20:43:39.701449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.701611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.701626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.801 [2024-11-25 20:43:39.701639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:31.801 [2024-11-25 20:43:39.701650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.702445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.702461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.801 [2024-11-25 20:43:39.702481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:25:31.801 [2024-11-25 20:43:39.702492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.702641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.702656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.801 [2024-11-25 20:43:39.702668] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:31.801 [2024-11-25 20:43:39.702678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.727288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.727361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.801 [2024-11-25 20:43:39.727380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.621 ms 00:25:31.801 [2024-11-25 20:43:39.727406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.747827] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:31.801 [2024-11-25 20:43:39.747911] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:31.801 [2024-11-25 20:43:39.747931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.747943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:31.801 [2024-11-25 20:43:39.747958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.368 ms 00:25:31.801 [2024-11-25 20:43:39.747969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.779172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.779390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:31.801 [2024-11-25 20:43:39.779421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.139 ms 00:25:31.801 [2024-11-25 20:43:39.779433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.798942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.799107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:31.801 [2024-11-25 20:43:39.799133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.413 ms 00:25:31.801 [2024-11-25 20:43:39.799145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.817597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.817748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:31.801 [2024-11-25 20:43:39.817772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.378 ms 00:25:31.801 [2024-11-25 20:43:39.817784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.818712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.818746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:31.801 [2024-11-25 20:43:39.818760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:25:31.801 [2024-11-25 20:43:39.818772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.801 [2024-11-25 20:43:39.922483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.801 [2024-11-25 20:43:39.922580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:31.801 [2024-11-25 20:43:39.922600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 103.842 ms 00:25:31.801 [2024-11-25 20:43:39.922612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.068 [2024-11-25 20:43:39.937044] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:32.068 [2024-11-25 20:43:39.965189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.068 [2024-11-25 20:43:39.965276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:32.068 [2024-11-25 20:43:39.965296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.446 ms 00:25:32.068 [2024-11-25 20:43:39.965334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.068 [2024-11-25 20:43:39.965549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.068 [2024-11-25 20:43:39.965565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:32.068 [2024-11-25 20:43:39.965577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:32.068 [2024-11-25 20:43:39.965596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.068 [2024-11-25 20:43:39.965675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.068 [2024-11-25 20:43:39.965688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:32.068 [2024-11-25 20:43:39.965699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:32.068 [2024-11-25 20:43:39.965716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.068 [2024-11-25 20:43:39.965751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.069 [2024-11-25 20:43:39.965763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:32.069 [2024-11-25 20:43:39.965774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:32.069 [2024-11-25 20:43:39.965784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.069 [2024-11-25 20:43:39.965825] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:32.069 [2024-11-25 20:43:39.965851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.069 [2024-11-25 20:43:39.965862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:32.069 [2024-11-25 20:43:39.965873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:32.069 [2024-11-25 20:43:39.965883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.069 [2024-11-25 20:43:40.006020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.069 [2024-11-25 20:43:40.006107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:32.069 [2024-11-25 20:43:40.006125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.168 ms 00:25:32.069 [2024-11-25 20:43:40.006139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.069 [2024-11-25 20:43:40.006353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.069 [2024-11-25 20:43:40.006371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:32.069 [2024-11-25 20:43:40.006385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:32.069 [2024-11-25 20:43:40.006396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
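With 'FTL startup' about to report its total just below, the per-step durations printed by trace_step can be summarized to see where the time went. A rough sketch, assuming the console output has been captured to ftl.log with one entry per line (ftl.log is a hypothetical capture, not a file this test writes):

    grep trace_step ftl.log | awk '
        match($0, /name: /)            { name = substr($0, RSTART + 6) }
        match($0, /duration: [0-9.]+/) {
            printf "%10s ms  %s\n", substr($0, RSTART + 10, RLENGTH - 10), name
        }' | sort -rn | head

On this run that would rank steps like "Restore P2L checkpoints" (103.842 ms) and "Initialize NV cache" (72.468 ms) at the top of the startup cost.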
00:25:32.069 [2024-11-25 20:43:40.007822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:32.069 [2024-11-25 20:43:40.013233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 481.040 ms, result 0 00:25:32.069 [2024-11-25 20:43:40.014320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:32.069 [2024-11-25 20:43:40.033214] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:33.051  [2024-11-25T20:43:42.121Z] Copying: 24/256 [MB] (24 MBps) [2024-11-25T20:43:43.057Z] Copying: 48/256 [MB] (24 MBps) [2024-11-25T20:43:44.435Z] Copying: 72/256 [MB] (23 MBps) [2024-11-25T20:43:45.372Z] Copying: 96/256 [MB] (23 MBps) [2024-11-25T20:43:46.308Z] Copying: 119/256 [MB] (23 MBps) [2024-11-25T20:43:47.246Z] Copying: 144/256 [MB] (24 MBps) [2024-11-25T20:43:48.184Z] Copying: 168/256 [MB] (24 MBps) [2024-11-25T20:43:49.122Z] Copying: 192/256 [MB] (23 MBps) [2024-11-25T20:43:50.063Z] Copying: 216/256 [MB] (24 MBps) [2024-11-25T20:43:51.003Z] Copying: 239/256 [MB] (23 MBps) [2024-11-25T20:43:51.003Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-25 20:43:50.688310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.867 [2024-11-25 20:43:50.704411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.704469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:42.867 [2024-11-25 20:43:50.704489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:42.867 [2024-11-25 20:43:50.704511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.704540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:42.867 [2024-11-25 20:43:50.709084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.709115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:42.867 [2024-11-25 20:43:50.709129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.532 ms 00:25:42.867 [2024-11-25 20:43:50.709156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.711307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.711357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:42.867 [2024-11-25 20:43:50.711372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:25:42.867 [2024-11-25 20:43:50.711383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.718947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.719007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:42.867 [2024-11-25 20:43:50.719019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.555 ms 00:25:42.867 [2024-11-25 20:43:50.719046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.724750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.724787] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:42.867 [2024-11-25 20:43:50.724800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.673 ms 00:25:42.867 [2024-11-25 20:43:50.724827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.764528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.764842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:42.867 [2024-11-25 20:43:50.764871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.685 ms 00:25:42.867 [2024-11-25 20:43:50.764882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.788629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.788707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:42.867 [2024-11-25 20:43:50.788757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.690 ms 00:25:42.867 [2024-11-25 20:43:50.788768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.788971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.788986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:42.867 [2024-11-25 20:43:50.788999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:42.867 [2024-11-25 20:43:50.789027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.830401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.830486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:42.867 [2024-11-25 20:43:50.830507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.414 ms 00:25:42.867 [2024-11-25 20:43:50.830519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.871453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.871544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:42.867 [2024-11-25 20:43:50.871562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.883 ms 00:25:42.867 [2024-11-25 20:43:50.871590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.911981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.912063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:42.867 [2024-11-25 20:43:50.912080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.334 ms 00:25:42.867 [2024-11-25 20:43:50.912107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.952603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.867 [2024-11-25 20:43:50.952942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:42.867 [2024-11-25 20:43:50.952971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.411 ms 00:25:42.867 [2024-11-25 20:43:50.952984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.867 [2024-11-25 20:43:50.953100] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:25:42.867 [2024-11-25 20:43:50.953123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Bands 2-99 print identically: 0 / 261120 wr_cnt: 0 state: free] 00:25:42.869 [2024-11-25 20:43:50.954339] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:42.869 [2024-11-25 20:43:50.954361] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:42.869 [2024-11-25 20:43:50.954372] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6 00:25:42.869 [2024-11-25 20:43:50.954385] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:42.869 [2024-11-25 20:43:50.954395] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:42.869 [2024-11-25 20:43:50.954406] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:42.869 [2024-11-25 20:43:50.954417] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:42.869 [2024-11-25 20:43:50.954428] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:42.869 [2024-11-25 20:43:50.954439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:42.869 [2024-11-25 20:43:50.954449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:42.869 [2024-11-25 20:43:50.954459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:42.869 [2024-11-25 20:43:50.954469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:42.869 [2024-11-25 20:43:50.954480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.869 [2024-11-25 20:43:50.954498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:42.869 [2024-11-25 20:43:50.954509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:25:42.869 [2024-11-25 20:43:50.954520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.869 [2024-11-25 20:43:50.976161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.869 [2024-11-25 20:43:50.976474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:42.869 [2024-11-25 20:43:50.976503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.643 ms 00:25:42.869 [2024-11-25 20:43:50.976515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.869 [2024-11-25 20:43:50.977182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.869 [2024-11-25 20:43:50.977199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:42.869 [2024-11-25 20:43:50.977211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:25:42.869 [2024-11-25 20:43:50.977222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.128 [2024-11-25 20:43:51.037086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.128 [2024-11-25 20:43:51.037433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.129 [2024-11-25 20:43:51.037464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.129 [2024-11-25 20:43:51.037477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.129 [2024-11-25 20:43:51.037648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.129 [2024-11-25 20:43:51.037662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.129 [2024-11-25 20:43:51.037673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.129 [2024-11-25 20:43:51.037685] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.129 [2024-11-25 20:43:51.037762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.129 [2024-11-25 20:43:51.037776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.129 [2024-11-25 20:43:51.037787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.129 [2024-11-25 20:43:51.037799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.129 [2024-11-25 20:43:51.037821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.129 [2024-11-25 20:43:51.037838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.129 [2024-11-25 20:43:51.037850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.129 [2024-11-25 20:43:51.037861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.129 [2024-11-25 20:43:51.176294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.129 [2024-11-25 20:43:51.176659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.129 [2024-11-25 20:43:51.176688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.129 [2024-11-25 20:43:51.176701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.389 [2024-11-25 20:43:51.288428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.389 [2024-11-25 20:43:51.288515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.389 [2024-11-25 20:43:51.288532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.389 [2024-11-25 20:43:51.288561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.389 [2024-11-25 20:43:51.288697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.389 [2024-11-25 20:43:51.288711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.389 [2024-11-25 20:43:51.288723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.389 [2024-11-25 20:43:51.288734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.389 [2024-11-25 20:43:51.288769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.389 [2024-11-25 20:43:51.288782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.389 [2024-11-25 20:43:51.288797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.389 [2024-11-25 20:43:51.288808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.389 [2024-11-25 20:43:51.288944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.389 [2024-11-25 20:43:51.288960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.389 [2024-11-25 20:43:51.288971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.389 [2024-11-25 20:43:51.288982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.389 [2024-11-25 20:43:51.289025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.389 [2024-11-25 20:43:51.289039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:43.389 [2024-11-25 20:43:51.289056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:43.389 [2024-11-25 20:43:51.289068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.389 [2024-11-25 20:43:51.289117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.389 [2024-11-25 20:43:51.289129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.389 [2024-11-25 20:43:51.289140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.389 [2024-11-25 20:43:51.289150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.389 [2024-11-25 20:43:51.289205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.389 [2024-11-25 20:43:51.289218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.389 [2024-11-25 20:43:51.289232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.389 [2024-11-25 20:43:51.289243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.389 [2024-11-25 20:43:51.289437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 585.963 ms, result 0 00:25:44.770 00:25:44.770 00:25:44.770 20:43:52 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78870 00:25:44.770 20:43:52 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:44.770 20:43:52 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78870 00:25:44.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.770 20:43:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78870 ']' 00:25:44.770 20:43:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.770 20:43:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.770 20:43:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.770 20:43:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.770 20:43:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:44.770 [2024-11-25 20:43:52.704750] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
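The statistics dumped at the end of the shutdown above also explain the "WAF: inf" entry: WAF is total writes divided by user writes, and 960 metadata-only writes against 0 user writes divides out to infinity. The ftl_trim scenario that the remaining records exercise reduces to the sketch below. The RPC commands and binary paths are taken verbatim from this log; the config file name and the pid/cleanup handling are assumptions for illustration, not part of the captured run.

#!/usr/bin/env bash
# Sketch of the ftl_trim sequence exercised in the records below.
SPDK=/home/vagrant/spdk_repo/spdk

# Start the SPDK target with FTL init tracing and let it come up on /var/tmp/spdk.sock.
"$SPDK/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!                      # 78870 in this particular run

# Recreate the bdev/FTL stack saved by the previous test stage
# (config file name is an assumption; load_config reads JSON on stdin).
"$SPDK/scripts/rpc.py" load_config < ftl.json

# Trim 1024 blocks at each end of the address space; the layout dump below
# reports 23592960 L2P entries, and 23592960 - 1024 = 23591936.
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# Stop the target; on exit FTL persists its metadata and sets the clean state.
kill "$svcpid"
wait "$svcpid" || true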
00:25:44.770 [2024-11-25 20:43:52.704882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78870 ] 00:25:44.770 [2024-11-25 20:43:52.888788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.030 [2024-11-25 20:43:53.037353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.002 20:43:54 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.002 20:43:54 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:46.002 20:43:54 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:46.262 [2024-11-25 20:43:54.278707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.262 [2024-11-25 20:43:54.278792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:46.522 [2024-11-25 20:43:54.462878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-25 20:43:54.462957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:46.522 [2024-11-25 20:43:54.462979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:46.522 [2024-11-25 20:43:54.462990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-25 20:43:54.466537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-25 20:43:54.466582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.522 [2024-11-25 20:43:54.466598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.528 ms 00:25:46.522 [2024-11-25 20:43:54.466609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-25 20:43:54.466724] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:46.522 [2024-11-25 20:43:54.467751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:46.522 [2024-11-25 20:43:54.467787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-25 20:43:54.467799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.522 [2024-11-25 20:43:54.467814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:25:46.522 [2024-11-25 20:43:54.467827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-25 20:43:54.470368] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:46.522 [2024-11-25 20:43:54.490902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-25 20:43:54.490956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:46.522 [2024-11-25 20:43:54.490972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.572 ms 00:25:46.522 [2024-11-25 20:43:54.490987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-25 20:43:54.491111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-25 20:43:54.491129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:46.522 [2024-11-25 20:43:54.491142] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:46.522 [2024-11-25 20:43:54.491155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-25 20:43:54.504038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-25 20:43:54.504094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.522 [2024-11-25 20:43:54.504109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.846 ms 00:25:46.522 [2024-11-25 20:43:54.504123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-25 20:43:54.504282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-25 20:43:54.504301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.522 [2024-11-25 20:43:54.504314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:46.522 [2024-11-25 20:43:54.504352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.522 [2024-11-25 20:43:54.504390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.522 [2024-11-25 20:43:54.504405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:46.522 [2024-11-25 20:43:54.504416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:46.522 [2024-11-25 20:43:54.504429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.523 [2024-11-25 20:43:54.504462] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:46.523 [2024-11-25 20:43:54.510490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.523 [2024-11-25 20:43:54.510671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.523 [2024-11-25 20:43:54.510701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.044 ms 00:25:46.523 [2024-11-25 20:43:54.510713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.523 [2024-11-25 20:43:54.510788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.523 [2024-11-25 20:43:54.510800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:46.523 [2024-11-25 20:43:54.510816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:46.523 [2024-11-25 20:43:54.510830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.523 [2024-11-25 20:43:54.510859] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:46.523 [2024-11-25 20:43:54.510884] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:46.523 [2024-11-25 20:43:54.510938] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:46.523 [2024-11-25 20:43:54.510959] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:46.523 [2024-11-25 20:43:54.511058] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:46.523 [2024-11-25 20:43:54.511072] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:46.523 [2024-11-25 20:43:54.511096] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:46.523 [2024-11-25 20:43:54.511110] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511126] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511138] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:46.523 [2024-11-25 20:43:54.511152] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:46.523 [2024-11-25 20:43:54.511162] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:46.523 [2024-11-25 20:43:54.511179] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:46.523 [2024-11-25 20:43:54.511191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.523 [2024-11-25 20:43:54.511205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:46.523 [2024-11-25 20:43:54.511216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:25:46.523 [2024-11-25 20:43:54.511230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.523 [2024-11-25 20:43:54.511310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.523 [2024-11-25 20:43:54.511337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:46.523 [2024-11-25 20:43:54.511349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:46.523 [2024-11-25 20:43:54.511363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.523 [2024-11-25 20:43:54.511468] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:46.523 [2024-11-25 20:43:54.511485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:46.523 [2024-11-25 20:43:54.511496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:46.523 [2024-11-25 20:43:54.511533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:46.523 [2024-11-25 20:43:54.511571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.523 [2024-11-25 20:43:54.511593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:46.523 [2024-11-25 20:43:54.511606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:46.523 [2024-11-25 20:43:54.511621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:46.523 [2024-11-25 20:43:54.511634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:46.523 [2024-11-25 20:43:54.511643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:46.523 [2024-11-25 20:43:54.511657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.523 
[2024-11-25 20:43:54.511667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:46.523 [2024-11-25 20:43:54.511680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:46.523 [2024-11-25 20:43:54.511722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:46.523 [2024-11-25 20:43:54.511758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:46.523 [2024-11-25 20:43:54.511788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:46.523 [2024-11-25 20:43:54.511822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:46.523 [2024-11-25 20:43:54.511845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:46.523 [2024-11-25 20:43:54.511854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.523 [2024-11-25 20:43:54.511875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:46.523 [2024-11-25 20:43:54.511902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:46.523 [2024-11-25 20:43:54.511911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:46.523 [2024-11-25 20:43:54.511923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:46.523 [2024-11-25 20:43:54.511933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:46.523 [2024-11-25 20:43:54.511947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:46.523 [2024-11-25 20:43:54.511969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:46.523 [2024-11-25 20:43:54.511978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.523 [2024-11-25 20:43:54.511990] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:46.523 [2024-11-25 20:43:54.512003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:46.523 [2024-11-25 20:43:54.512016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:46.523 [2024-11-25 20:43:54.512026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:46.523 [2024-11-25 20:43:54.512040] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:46.523 [2024-11-25 20:43:54.512050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:46.523 [2024-11-25 20:43:54.512062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:46.523 [2024-11-25 20:43:54.512072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:46.523 [2024-11-25 20:43:54.512084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:46.523 [2024-11-25 20:43:54.512094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:46.523 [2024-11-25 20:43:54.512108] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:46.523 [2024-11-25 20:43:54.512121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.523 [2024-11-25 20:43:54.512140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:46.523 [2024-11-25 20:43:54.512151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:46.523 [2024-11-25 20:43:54.512165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:46.523 [2024-11-25 20:43:54.512176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:46.523 [2024-11-25 20:43:54.512189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:46.523 [2024-11-25 20:43:54.512200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:46.523 [2024-11-25 20:43:54.512213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:46.523 [2024-11-25 20:43:54.512224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:46.523 [2024-11-25 20:43:54.512237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:46.523 [2024-11-25 20:43:54.512248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:46.523 [2024-11-25 20:43:54.512261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:46.523 [2024-11-25 20:43:54.512271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:46.523 [2024-11-25 20:43:54.512285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:46.523 [2024-11-25 20:43:54.512296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:46.523 [2024-11-25 20:43:54.512309] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:46.523 [2024-11-25 
20:43:54.512320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:46.523 [2024-11-25 20:43:54.512350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:46.523 [2024-11-25 20:43:54.512360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:46.523 [2024-11-25 20:43:54.512374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:46.523 [2024-11-25 20:43:54.512385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:46.523 [2024-11-25 20:43:54.512399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.523 [2024-11-25 20:43:54.512410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:46.523 [2024-11-25 20:43:54.512423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:25:46.523 [2024-11-25 20:43:54.512437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.523 [2024-11-25 20:43:54.562223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.524 [2024-11-25 20:43:54.562570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.524 [2024-11-25 20:43:54.562608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.787 ms 00:25:46.524 [2024-11-25 20:43:54.562624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.524 [2024-11-25 20:43:54.562863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.524 [2024-11-25 20:43:54.562877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:46.524 [2024-11-25 20:43:54.562892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:46.524 [2024-11-25 20:43:54.562903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.524 [2024-11-25 20:43:54.618113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.524 [2024-11-25 20:43:54.618454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.524 [2024-11-25 20:43:54.618562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.259 ms 00:25:46.524 [2024-11-25 20:43:54.618601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.524 [2024-11-25 20:43:54.618781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.524 [2024-11-25 20:43:54.618940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.524 [2024-11-25 20:43:54.618959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:46.524 [2024-11-25 20:43:54.618970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.524 [2024-11-25 20:43:54.619759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.524 [2024-11-25 20:43:54.619786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.524 [2024-11-25 20:43:54.619801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:25:46.524 [2024-11-25 20:43:54.619812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:46.524 [2024-11-25 20:43:54.619962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.524 [2024-11-25 20:43:54.619977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.524 [2024-11-25 20:43:54.619991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:25:46.524 [2024-11-25 20:43:54.620002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.524 [2024-11-25 20:43:54.645799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.524 [2024-11-25 20:43:54.646085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.524 [2024-11-25 20:43:54.646121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.800 ms 00:25:46.524 [2024-11-25 20:43:54.646134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.666588] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:46.783 [2024-11-25 20:43:54.666640] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:46.783 [2024-11-25 20:43:54.666662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.666675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:46.783 [2024-11-25 20:43:54.666693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.335 ms 00:25:46.783 [2024-11-25 20:43:54.666716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.698494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.698555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:46.783 [2024-11-25 20:43:54.698576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.688 ms 00:25:46.783 [2024-11-25 20:43:54.698587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.719060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.719123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:46.783 [2024-11-25 20:43:54.719163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.383 ms 00:25:46.783 [2024-11-25 20:43:54.719174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.738518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.738580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:46.783 [2024-11-25 20:43:54.738600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.254 ms 00:25:46.783 [2024-11-25 20:43:54.738610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.739502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.739528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:46.783 [2024-11-25 20:43:54.739544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:25:46.783 [2024-11-25 20:43:54.739555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 
20:43:54.843695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.843788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:46.783 [2024-11-25 20:43:54.843828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.270 ms 00:25:46.783 [2024-11-25 20:43:54.843841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.856862] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:46.783 [2024-11-25 20:43:54.883135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.883239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:46.783 [2024-11-25 20:43:54.883280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.179 ms 00:25:46.783 [2024-11-25 20:43:54.883295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.883491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.883510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:46.783 [2024-11-25 20:43:54.883523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:46.783 [2024-11-25 20:43:54.883537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.883613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.883630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:46.783 [2024-11-25 20:43:54.883642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:46.783 [2024-11-25 20:43:54.883670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.883701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.883719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:46.783 [2024-11-25 20:43:54.883731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:46.783 [2024-11-25 20:43:54.883747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.783 [2024-11-25 20:43:54.883798] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:46.783 [2024-11-25 20:43:54.883823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.783 [2024-11-25 20:43:54.883841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:46.783 [2024-11-25 20:43:54.883857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:46.783 [2024-11-25 20:43:54.883868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.042 [2024-11-25 20:43:54.922676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.042 [2024-11-25 20:43:54.922749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:47.042 [2024-11-25 20:43:54.922775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.823 ms 00:25:47.042 [2024-11-25 20:43:54.922788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.042 [2024-11-25 20:43:54.922942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.042 [2024-11-25 20:43:54.922956] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:47.042 [2024-11-25 20:43:54.922977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:47.042 [2024-11-25 20:43:54.922988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.042 [2024-11-25 20:43:54.924417] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:47.042 [2024-11-25 20:43:54.929708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 461.907 ms, result 0 00:25:47.042 [2024-11-25 20:43:54.931287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:47.042 Some configs were skipped because the RPC state that can call them passed over. 00:25:47.042 20:43:54 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:47.301 [2024-11-25 20:43:55.187378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.301 [2024-11-25 20:43:55.187674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:47.301 [2024-11-25 20:43:55.187789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:25:47.301 [2024-11-25 20:43:55.187838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.301 [2024-11-25 20:43:55.187923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.223 ms, result 0 00:25:47.301 true 00:25:47.301 20:43:55 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:47.301 [2024-11-25 20:43:55.406980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.301 [2024-11-25 20:43:55.407211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:47.301 [2024-11-25 20:43:55.407245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:25:47.301 [2024-11-25 20:43:55.407256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.301 [2024-11-25 20:43:55.407320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.739 ms, result 0 00:25:47.301 true 00:25:47.560 20:43:55 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78870 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78870 ']' 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78870 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78870 00:25:47.560 killing process with pid 78870 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78870' 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78870 00:25:47.560 20:43:55 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78870 00:25:48.940 [2024-11-25 20:43:56.739861] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.740164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:48.940 [2024-11-25 20:43:56.740192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:48.940 [2024-11-25 20:43:56.740207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.940 [2024-11-25 20:43:56.740252] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:48.940 [2024-11-25 20:43:56.744954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.744989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:48.940 [2024-11-25 20:43:56.745009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.686 ms 00:25:48.940 [2024-11-25 20:43:56.745020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.940 [2024-11-25 20:43:56.745335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.745352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:48.940 [2024-11-25 20:43:56.745366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:25:48.940 [2024-11-25 20:43:56.745377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.940 [2024-11-25 20:43:56.748821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.748859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:48.940 [2024-11-25 20:43:56.748878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.422 ms 00:25:48.940 [2024-11-25 20:43:56.748889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.940 [2024-11-25 20:43:56.754711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.754872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:48.940 [2024-11-25 20:43:56.754901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.689 ms 00:25:48.940 [2024-11-25 20:43:56.754912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.940 [2024-11-25 20:43:56.771141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.771193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:48.940 [2024-11-25 20:43:56.771216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.177 ms 00:25:48.940 [2024-11-25 20:43:56.771227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.940 [2024-11-25 20:43:56.782026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.782182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:48.940 [2024-11-25 20:43:56.782212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.739 ms 00:25:48.940 [2024-11-25 20:43:56.782223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.940 [2024-11-25 20:43:56.782395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.782411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:48.940 [2024-11-25 20:43:56.782426] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:25:48.940 [2024-11-25 20:43:56.782436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:48.940 [2024-11-25 20:43:56.798563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.798602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:48.940 [2024-11-25 20:43:56.798620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.125 ms 00:25:48.940 [2024-11-25 20:43:56.798631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:48.940 [2024-11-25 20:43:56.813682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.813720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:48.940 [2024-11-25 20:43:56.813742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.017 ms 00:25:48.940 [2024-11-25 20:43:56.813752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:48.940 [2024-11-25 20:43:56.828728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.828903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:48.940 [2024-11-25 20:43:56.828936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.940 ms 00:25:48.940 [2024-11-25 20:43:56.828947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:48.940 [2024-11-25 20:43:56.843733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.940 [2024-11-25 20:43:56.843918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:48.940 [2024-11-25 20:43:56.843947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.715 ms 00:25:48.940 [2024-11-25 20:43:56.843958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:48.940 [2024-11-25 20:43:56.844059] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:48.940 [2024-11-25 20:43:56.844082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-100 omitted: all identical to Band 1, 0 / 261120 wr_cnt: 0 state: free]
00:25:48.941 [2024-11-25 20:43:56.845500] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:48.941 [2024-11-25 20:43:56.845522] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6
00:25:48.941 [2024-11-25 20:43:56.845538] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:48.941 [2024-11-25 20:43:56.845551] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:48.941 [2024-11-25 20:43:56.845562] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:48.941 [2024-11-25 20:43:56.845577] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:48.941 [2024-11-25 20:43:56.845587] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:48.941 [2024-11-25 20:43:56.845609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:48.941 [2024-11-25 20:43:56.845620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:48.941 [2024-11-25 20:43:56.845632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:48.941 [2024-11-25 20:43:56.845642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:48.942 [2024-11-25 20:43:56.845655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
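The statistics dump above reports total writes: 960, user writes: 0 and WAF: inf. A minimal sketch of that relationship, assuming only the conventional definition of write amplification (media writes divided by user writes) rather than anything read out of the SPDK sources:

def waf(total_writes: int, user_writes: int) -> float:
    # Write amplification factor: media writes per user write. With zero
    # user writes, as in this trim-focused run, the ratio is infinite,
    # matching the "WAF: inf" line in the dump above.
    return float("inf") if user_writes == 0 else total_writes / user_writes

print(waf(960, 0))  # -> inf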
00:25:48.942 [2024-11-25 20:43:56.845666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:48.942 [2024-11-25 20:43:56.845681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:25:48.942 [2024-11-25 20:43:56.845692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.942 [2024-11-25 20:43:56.867049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.942 [2024-11-25 20:43:56.867230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:48.942 [2024-11-25 20:43:56.867261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.358 ms 00:25:48.942 [2024-11-25 20:43:56.867273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.942 [2024-11-25 20:43:56.867953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.942 [2024-11-25 20:43:56.867971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:48.942 [2024-11-25 20:43:56.867990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:25:48.942 [2024-11-25 20:43:56.868000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.942 [2024-11-25 20:43:56.941478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.942 [2024-11-25 20:43:56.941559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:48.942 [2024-11-25 20:43:56.941580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.942 [2024-11-25 20:43:56.941597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.942 [2024-11-25 20:43:56.941794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.942 [2024-11-25 20:43:56.941808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:48.942 [2024-11-25 20:43:56.941828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.942 [2024-11-25 20:43:56.941839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.942 [2024-11-25 20:43:56.941915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.942 [2024-11-25 20:43:56.941930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:48.942 [2024-11-25 20:43:56.941948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.942 [2024-11-25 20:43:56.941960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.942 [2024-11-25 20:43:56.941984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.942 [2024-11-25 20:43:56.941997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:48.942 [2024-11-25 20:43:56.942011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.942 [2024-11-25 20:43:56.942025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.201 [2024-11-25 20:43:57.081056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.201 [2024-11-25 20:43:57.081144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:49.201 [2024-11-25 20:43:57.081169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.201 [2024-11-25 20:43:57.081181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.201 [2024-11-25 
20:43:57.194617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.201 [2024-11-25 20:43:57.194706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:49.202 [2024-11-25 20:43:57.194728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.202 [2024-11-25 20:43:57.194744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.202 [2024-11-25 20:43:57.194898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.202 [2024-11-25 20:43:57.194911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.202 [2024-11-25 20:43:57.194930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.202 [2024-11-25 20:43:57.194941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.202 [2024-11-25 20:43:57.194978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.202 [2024-11-25 20:43:57.194990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.202 [2024-11-25 20:43:57.195004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.202 [2024-11-25 20:43:57.195015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.202 [2024-11-25 20:43:57.195167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.202 [2024-11-25 20:43:57.195181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.202 [2024-11-25 20:43:57.195195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.202 [2024-11-25 20:43:57.195205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.202 [2024-11-25 20:43:57.195253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.202 [2024-11-25 20:43:57.195266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:49.202 [2024-11-25 20:43:57.195280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.202 [2024-11-25 20:43:57.195290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.202 [2024-11-25 20:43:57.195368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.202 [2024-11-25 20:43:57.195381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.202 [2024-11-25 20:43:57.195400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.202 [2024-11-25 20:43:57.195411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.202 [2024-11-25 20:43:57.195470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:49.202 [2024-11-25 20:43:57.195483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.202 [2024-11-25 20:43:57.195497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:49.202 [2024-11-25 20:43:57.195508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.202 [2024-11-25 20:43:57.195680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.524 ms, result 0 00:25:50.581 20:43:58 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:50.581 20:43:58 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:50.581 [2024-11-25 20:43:58.417304] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:25:50.581 [2024-11-25 20:43:58.417471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78945 ] 00:25:50.581 [2024-11-25 20:43:58.601217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.841 [2024-11-25 20:43:58.748546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.100 [2024-11-25 20:43:59.158680] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:51.100 [2024-11-25 20:43:59.158772] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:51.360 [2024-11-25 20:43:59.325986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.326286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:51.360 [2024-11-25 20:43:59.326317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:51.360 [2024-11-25 20:43:59.326343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.329982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.330026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:51.360 [2024-11-25 20:43:59.330041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:25:51.360 [2024-11-25 20:43:59.330052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.330165] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:51.360 [2024-11-25 20:43:59.331240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:51.360 [2024-11-25 20:43:59.331276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.331288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:51.360 [2024-11-25 20:43:59.331300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:25:51.360 [2024-11-25 20:43:59.331311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.333766] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:51.360 [2024-11-25 20:43:59.354703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.354765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:51.360 [2024-11-25 20:43:59.354784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.970 ms 00:25:51.360 [2024-11-25 20:43:59.354795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.354933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.354950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:51.360 [2024-11-25 20:43:59.354963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:25:51.360 [2024-11-25 20:43:59.354974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.367782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.367829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:51.360 [2024-11-25 20:43:59.367846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.774 ms 00:25:51.360 [2024-11-25 20:43:59.367874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.368035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.368053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:51.360 [2024-11-25 20:43:59.368065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:51.360 [2024-11-25 20:43:59.368076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.368117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.368129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:51.360 [2024-11-25 20:43:59.368140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:51.360 [2024-11-25 20:43:59.368152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.368182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:51.360 [2024-11-25 20:43:59.373983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.374022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:51.360 [2024-11-25 20:43:59.374037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.821 ms 00:25:51.360 [2024-11-25 20:43:59.374048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.374110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.374124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:51.360 [2024-11-25 20:43:59.374136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:51.360 [2024-11-25 20:43:59.374147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.374176] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:51.360 [2024-11-25 20:43:59.374205] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:51.360 [2024-11-25 20:43:59.374246] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:51.360 [2024-11-25 20:43:59.374266] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:51.360 [2024-11-25 20:43:59.374376] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:51.360 [2024-11-25 20:43:59.374391] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:51.360 [2024-11-25 20:43:59.374406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:51.360 [2024-11-25 20:43:59.374425] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:51.360 [2024-11-25 20:43:59.374438] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:51.360 [2024-11-25 20:43:59.374451] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:51.360 [2024-11-25 20:43:59.374461] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:51.360 [2024-11-25 20:43:59.374472] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:51.360 [2024-11-25 20:43:59.374483] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:51.360 [2024-11-25 20:43:59.374496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.374507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:51.360 [2024-11-25 20:43:59.374518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:25:51.360 [2024-11-25 20:43:59.374528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.374608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.374623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:51.360 [2024-11-25 20:43:59.374634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:51.360 [2024-11-25 20:43:59.374644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.360 [2024-11-25 20:43:59.374746] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:51.360 [2024-11-25 20:43:59.374760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:51.360 [2024-11-25 20:43:59.374772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:51.360 [2024-11-25 20:43:59.374784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.360 [2024-11-25 20:43:59.374795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:51.360 [2024-11-25 20:43:59.374805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:51.360 [2024-11-25 20:43:59.374814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:51.360 [2024-11-25 20:43:59.374825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:51.360 [2024-11-25 20:43:59.374835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:51.360 [2024-11-25 20:43:59.374845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:51.360 [2024-11-25 20:43:59.374855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:51.360 [2024-11-25 20:43:59.374877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:51.360 [2024-11-25 20:43:59.374890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:51.360 [2024-11-25 20:43:59.374901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:51.360 [2024-11-25 20:43:59.374911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:51.360 [2024-11-25 20:43:59.374921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.360 [2024-11-25 20:43:59.374931] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:51.360 [2024-11-25 20:43:59.374940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:51.360 [2024-11-25 20:43:59.374950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.360 [2024-11-25 20:43:59.374960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:51.360 [2024-11-25 20:43:59.374970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:51.360 [2024-11-25 20:43:59.374979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:51.360 [2024-11-25 20:43:59.374988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:51.360 [2024-11-25 20:43:59.374998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:51.360 [2024-11-25 20:43:59.375008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:51.360 [2024-11-25 20:43:59.375017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:51.360 [2024-11-25 20:43:59.375027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:51.360 [2024-11-25 20:43:59.375036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:51.360 [2024-11-25 20:43:59.375045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:51.360 [2024-11-25 20:43:59.375055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:51.360 [2024-11-25 20:43:59.375064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:51.360 [2024-11-25 20:43:59.375072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:51.360 [2024-11-25 20:43:59.375081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:51.360 [2024-11-25 20:43:59.375091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:51.360 [2024-11-25 20:43:59.375100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:51.360 [2024-11-25 20:43:59.375110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:51.360 [2024-11-25 20:43:59.375118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:51.360 [2024-11-25 20:43:59.375128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:51.360 [2024-11-25 20:43:59.375137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:51.360 [2024-11-25 20:43:59.375146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.360 [2024-11-25 20:43:59.375155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:51.360 [2024-11-25 20:43:59.375165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:51.360 [2024-11-25 20:43:59.375175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.360 [2024-11-25 20:43:59.375184] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:51.360 [2024-11-25 20:43:59.375195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:51.360 [2024-11-25 20:43:59.375210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:51.360 [2024-11-25 20:43:59.375220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:51.360 [2024-11-25 20:43:59.375231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:51.360 
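Each region in the layout dump above is emitted as a three-record group: a "Region <name>" header from dump_region, followed by its "offset" and "blocks" values in MiB. A short sketch of folding those groups back into one row per region when post-processing a log like this; it assumes an unwrapped copy of the log (one record per line) and only the [FTL][ftl0] record format visible here:

import re

# Header record: "... 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p"
REGION = re.compile(r"dump_region: \*NOTICE\*: \[FTL\]\[ftl0\] Region (\S+)")
# Field records: "... offset: 0.12 MiB" and "... blocks: 90.00 MiB"
FIELD = re.compile(r"dump_region: \*NOTICE\*: \[FTL\]\[ftl0\] (offset|blocks): ([0-9.]+) MiB")

def parse_layout(lines):
    rows, current = [], None
    for line in lines:
        m = REGION.search(line)
        if m:
            current = {"region": m.group(1)}
            rows.append(current)
            continue
        m = FIELD.search(line)
        if m and current is not None:
            current[m.group(1)] = float(m.group(2))
    return rows

For example, the l2p group above collapses to {"region": "l2p", "offset": 0.12, "blocks": 90.0}.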
[2024-11-25 20:43:59.375240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:51.360 [2024-11-25 20:43:59.375251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:51.360 [2024-11-25 20:43:59.375260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:51.360 [2024-11-25 20:43:59.375270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:51.360 [2024-11-25 20:43:59.375280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:51.360 [2024-11-25 20:43:59.375291] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:51.360 [2024-11-25 20:43:59.375304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:51.360 [2024-11-25 20:43:59.375317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:51.360 [2024-11-25 20:43:59.375582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:51.360 [2024-11-25 20:43:59.375651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:51.360 [2024-11-25 20:43:59.375699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:51.360 [2024-11-25 20:43:59.375747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:51.360 [2024-11-25 20:43:59.375796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:51.360 [2024-11-25 20:43:59.375842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:51.360 [2024-11-25 20:43:59.375948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:51.360 [2024-11-25 20:43:59.375998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:51.360 [2024-11-25 20:43:59.376046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:51.360 [2024-11-25 20:43:59.376093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:51.360 [2024-11-25 20:43:59.376141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:51.360 [2024-11-25 20:43:59.376235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:51.360 [2024-11-25 20:43:59.376287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:51.360 [2024-11-25 20:43:59.376346] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:51.360 [2024-11-25 20:43:59.376398] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:51.360 [2024-11-25 20:43:59.376447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:51.360 [2024-11-25 20:43:59.376617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:51.360 [2024-11-25 20:43:59.376665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:51.360 [2024-11-25 20:43:59.376713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:51.360 [2024-11-25 20:43:59.376802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.360 [2024-11-25 20:43:59.376847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:51.361 [2024-11-25 20:43:59.376879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:25:51.361 [2024-11-25 20:43:59.376958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.361 [2024-11-25 20:43:59.427099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.361 [2024-11-25 20:43:59.427460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:51.361 [2024-11-25 20:43:59.427491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.108 ms 00:25:51.361 [2024-11-25 20:43:59.427505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.361 [2024-11-25 20:43:59.427750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.361 [2024-11-25 20:43:59.427765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:51.361 [2024-11-25 20:43:59.427777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:51.361 [2024-11-25 20:43:59.427789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.493118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.493201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:51.621 [2024-11-25 20:43:59.493223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.404 ms 00:25:51.621 [2024-11-25 20:43:59.493235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.493409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.493424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:51.621 [2024-11-25 20:43:59.493438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:51.621 [2024-11-25 20:43:59.493449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.494209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.494224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:51.621 [2024-11-25 20:43:59.494237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:25:51.621 [2024-11-25 20:43:59.494255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 
20:43:59.494415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.494431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:51.621 [2024-11-25 20:43:59.494442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:25:51.621 [2024-11-25 20:43:59.494453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.518508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.518809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:51.621 [2024-11-25 20:43:59.518839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.066 ms 00:25:51.621 [2024-11-25 20:43:59.518852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.540044] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:51.621 [2024-11-25 20:43:59.540110] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:51.621 [2024-11-25 20:43:59.540128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.540156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:51.621 [2024-11-25 20:43:59.540173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.119 ms 00:25:51.621 [2024-11-25 20:43:59.540183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.571477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.571713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:51.621 [2024-11-25 20:43:59.571742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.176 ms 00:25:51.621 [2024-11-25 20:43:59.571754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.591778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.591853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:51.621 [2024-11-25 20:43:59.591872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.938 ms 00:25:51.621 [2024-11-25 20:43:59.591884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.611433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.611523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:51.621 [2024-11-25 20:43:59.611559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.448 ms 00:25:51.621 [2024-11-25 20:43:59.611571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.612473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.612499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:51.621 [2024-11-25 20:43:59.612513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:25:51.621 [2024-11-25 20:43:59.612524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.715623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:51.621 [2024-11-25 20:43:59.715747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:51.621 [2024-11-25 20:43:59.715785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.226 ms 00:25:51.621 [2024-11-25 20:43:59.715797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.621 [2024-11-25 20:43:59.730067] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:51.881 [2024-11-25 20:43:59.757148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.881 [2024-11-25 20:43:59.757238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:51.881 [2024-11-25 20:43:59.757258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.215 ms 00:25:51.881 [2024-11-25 20:43:59.757281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.881 [2024-11-25 20:43:59.757500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.881 [2024-11-25 20:43:59.757518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:51.881 [2024-11-25 20:43:59.757531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:51.881 [2024-11-25 20:43:59.757542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.881 [2024-11-25 20:43:59.757626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.881 [2024-11-25 20:43:59.757640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:51.881 [2024-11-25 20:43:59.757651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:51.881 [2024-11-25 20:43:59.757668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.881 [2024-11-25 20:43:59.757703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.881 [2024-11-25 20:43:59.757716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:51.881 [2024-11-25 20:43:59.757727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:51.881 [2024-11-25 20:43:59.757738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.881 [2024-11-25 20:43:59.757779] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:51.881 [2024-11-25 20:43:59.757792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.881 [2024-11-25 20:43:59.757803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:51.881 [2024-11-25 20:43:59.757814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:51.881 [2024-11-25 20:43:59.757824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.881 [2024-11-25 20:43:59.796169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.881 [2024-11-25 20:43:59.796250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:51.881 [2024-11-25 20:43:59.796271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.377 ms 00:25:51.881 [2024-11-25 20:43:59.796298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.881 [2024-11-25 20:43:59.796488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.881 [2024-11-25 20:43:59.796504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:51.881 [2024-11-25 20:43:59.796518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:51.881 [2024-11-25 20:43:59.796529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.881 [2024-11-25 20:43:59.797935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:51.881 [2024-11-25 20:43:59.803701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 472.341 ms, result 0 00:25:51.881 [2024-11-25 20:43:59.804724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:51.881 [2024-11-25 20:43:59.823602] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:52.820  [2024-11-25T20:44:01.893Z] Copying: 28/256 [MB] (28 MBps) [2024-11-25T20:44:02.862Z] Copying: 53/256 [MB] (24 MBps) [2024-11-25T20:44:04.238Z] Copying: 77/256 [MB] (24 MBps) [2024-11-25T20:44:05.175Z] Copying: 100/256 [MB] (23 MBps) [2024-11-25T20:44:06.112Z] Copying: 125/256 [MB] (24 MBps) [2024-11-25T20:44:07.051Z] Copying: 150/256 [MB] (24 MBps) [2024-11-25T20:44:08.017Z] Copying: 175/256 [MB] (25 MBps) [2024-11-25T20:44:08.956Z] Copying: 200/256 [MB] (25 MBps) [2024-11-25T20:44:09.892Z] Copying: 225/256 [MB] (24 MBps) [2024-11-25T20:44:10.151Z] Copying: 250/256 [MB] (25 MBps) [2024-11-25T20:44:10.151Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-25 20:44:10.036215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:02.015 [2024-11-25 20:44:10.051462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.015 [2024-11-25 20:44:10.051518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:02.015 [2024-11-25 20:44:10.051536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:02.015 [2024-11-25 20:44:10.051555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.016 [2024-11-25 20:44:10.051579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:02.016 [2024-11-25 20:44:10.056296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.016 [2024-11-25 20:44:10.056335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:02.016 [2024-11-25 20:44:10.056348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.708 ms 00:26:02.016 [2024-11-25 20:44:10.056359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.016 [2024-11-25 20:44:10.056609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.016 [2024-11-25 20:44:10.056622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:02.016 [2024-11-25 20:44:10.056633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:26:02.016 [2024-11-25 20:44:10.056643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.016 [2024-11-25 20:44:10.059498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.016 [2024-11-25 20:44:10.059526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:02.016 [2024-11-25 20:44:10.059536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:26:02.016 [2024-11-25 20:44:10.059545] 
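The Copying: progress ticks above run from 28/256 MB at 20:44:01.893Z to 256/256 MB at 20:44:10.151Z, with a reported average of 25 MBps. A quick sanity check of that average; the exact start of the copy is not printed, so treating it as roughly 20:44:00Z (just after the app_thread IO channel was created) is an assumption:

from datetime import datetime, timezone

start = datetime(2024, 11, 25, 20, 44, 0, tzinfo=timezone.utc)                # assumed copy start
last_tick = datetime(2024, 11, 25, 20, 44, 10, 151000, tzinfo=timezone.utc)  # final 256/256 tick
print(256 / (last_tick - start).total_seconds())  # ~25.2 MB/s, consistent with "average 25 MBps"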
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.016 [2024-11-25 20:44:10.065071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.016 [2024-11-25 20:44:10.065102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:02.016 [2024-11-25 20:44:10.065114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.499 ms 00:26:02.016 [2024-11-25 20:44:10.065123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.016 [2024-11-25 20:44:10.099259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.016 [2024-11-25 20:44:10.099296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:02.016 [2024-11-25 20:44:10.099309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.128 ms 00:26:02.016 [2024-11-25 20:44:10.099320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.016 [2024-11-25 20:44:10.119750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.016 [2024-11-25 20:44:10.119798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:02.016 [2024-11-25 20:44:10.119819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.384 ms 00:26:02.016 [2024-11-25 20:44:10.119829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.016 [2024-11-25 20:44:10.119958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.016 [2024-11-25 20:44:10.119971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:02.016 [2024-11-25 20:44:10.119994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:02.016 [2024-11-25 20:44:10.120004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.277 [2024-11-25 20:44:10.155438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.277 [2024-11-25 20:44:10.155474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:02.277 [2024-11-25 20:44:10.155487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.473 ms 00:26:02.277 [2024-11-25 20:44:10.155497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.277 [2024-11-25 20:44:10.190269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.277 [2024-11-25 20:44:10.190433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:02.277 [2024-11-25 20:44:10.190455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.761 ms 00:26:02.277 [2024-11-25 20:44:10.190466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.277 [2024-11-25 20:44:10.226527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.277 [2024-11-25 20:44:10.226575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:02.277 [2024-11-25 20:44:10.226589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.065 ms 00:26:02.277 [2024-11-25 20:44:10.226598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.277 [2024-11-25 20:44:10.260470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.277 [2024-11-25 20:44:10.260602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:02.277 [2024-11-25 20:44:10.260638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 33.844 ms 00:26:02.277 [2024-11-25 20:44:10.260648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:02.277 [2024-11-25 20:44:10.260735] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:02.277 [2024-11-25 20:44:10.260756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-73 omitted: all identical to Band 1, 0 / 261120 wr_cnt: 0 state: free]
0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:02.278 [2024-11-25 20:44:10.261891] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:02.278 [2024-11-25 20:44:10.261902] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6 00:26:02.278 [2024-11-25 20:44:10.261913] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:02.278 [2024-11-25 20:44:10.261923] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:02.278 [2024-11-25 20:44:10.261934] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:02.278 [2024-11-25 20:44:10.261945] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:02.278 [2024-11-25 20:44:10.261955] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:02.278 [2024-11-25 20:44:10.261966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:02.278 [2024-11-25 20:44:10.261975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:02.278 [2024-11-25 20:44:10.261984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:02.278 [2024-11-25 20:44:10.261993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:02.278 [2024-11-25 20:44:10.262004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.278 [2024-11-25 20:44:10.262019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:02.278 [2024-11-25 20:44:10.262030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:26:02.278 [2024-11-25 20:44:10.262040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.278 [2024-11-25 20:44:10.282238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.278 [2024-11-25 20:44:10.282417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:02.278 [2024-11-25 20:44:10.282438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.208 ms 00:26:02.278 [2024-11-25 20:44:10.282450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.278 [2024-11-25 20:44:10.283105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.278 [2024-11-25 20:44:10.283121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:02.278 [2024-11-25 20:44:10.283133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:26:02.278 [2024-11-25 20:44:10.283143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.278 [2024-11-25 20:44:10.338163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.278 [2024-11-25 20:44:10.338349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:02.278 [2024-11-25 20:44:10.338372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.278 [2024-11-25 20:44:10.338384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.278 [2024-11-25 20:44:10.338480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.278 [2024-11-25 
20:44:10.338492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:02.278 [2024-11-25 20:44:10.338504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.278 [2024-11-25 20:44:10.338514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.278 [2024-11-25 20:44:10.338568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.278 [2024-11-25 20:44:10.338582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:02.278 [2024-11-25 20:44:10.338593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.278 [2024-11-25 20:44:10.338604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.278 [2024-11-25 20:44:10.338629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.278 [2024-11-25 20:44:10.338640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:02.278 [2024-11-25 20:44:10.338651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.278 [2024-11-25 20:44:10.338672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.537 [2024-11-25 20:44:10.465755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.538 [2024-11-25 20:44:10.465981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:02.538 [2024-11-25 20:44:10.466008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.538 [2024-11-25 20:44:10.466020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.538 [2024-11-25 20:44:10.569568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.538 [2024-11-25 20:44:10.569629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:02.538 [2024-11-25 20:44:10.569645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.538 [2024-11-25 20:44:10.569672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.538 [2024-11-25 20:44:10.569780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.538 [2024-11-25 20:44:10.569803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:02.538 [2024-11-25 20:44:10.569815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.538 [2024-11-25 20:44:10.569826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.538 [2024-11-25 20:44:10.569860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.538 [2024-11-25 20:44:10.569871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:02.538 [2024-11-25 20:44:10.569887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.538 [2024-11-25 20:44:10.569898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.538 [2024-11-25 20:44:10.570026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.538 [2024-11-25 20:44:10.570039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:02.538 [2024-11-25 20:44:10.570050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.538 [2024-11-25 20:44:10.570061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.538 [2024-11-25 20:44:10.570105] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.538 [2024-11-25 20:44:10.570117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:02.538 [2024-11-25 20:44:10.570133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.538 [2024-11-25 20:44:10.570143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.538 [2024-11-25 20:44:10.570190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.538 [2024-11-25 20:44:10.570202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:02.538 [2024-11-25 20:44:10.570212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.538 [2024-11-25 20:44:10.570223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.538 [2024-11-25 20:44:10.570276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.538 [2024-11-25 20:44:10.570289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:02.538 [2024-11-25 20:44:10.570304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.538 [2024-11-25 20:44:10.570315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.538 [2024-11-25 20:44:10.570737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.102 ms, result 0 00:26:03.916 00:26:03.916 00:26:03.916 20:44:11 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:26:03.916 20:44:11 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:04.175 20:44:12 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:04.176 [2024-11-25 20:44:12.253848] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
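The trim.sh steps above are the core of the verification pass: the first 4 MiB of the dumped data file (presumably the trimmed range) is compared byte-for-byte against /dev/zero, the file is fingerprinted with md5sum, and a known random pattern is then rewritten through the ftl0 bdev with spdk_dd. A minimal standalone sketch of the same sequence, reusing the exact paths and flags from the log above (anything trim.sh does around these calls is not visible in this excerpt):

  # the compared range should read back as zeros after the trim
  cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
  # fingerprint the data file, presumably for a later comparison
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
  # push 1024 blocks of a known random pattern through the FTL bdev
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 --count=1024 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json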
00:26:04.176 [2024-11-25 20:44:12.254148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79088 ] 00:26:04.434 [2024-11-25 20:44:12.438754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.694 [2024-11-25 20:44:12.583590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.953 [2024-11-25 20:44:12.995065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:04.953 [2024-11-25 20:44:12.995166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:05.213 [2024-11-25 20:44:13.162255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.213 [2024-11-25 20:44:13.162306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:05.213 [2024-11-25 20:44:13.162352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:05.213 [2024-11-25 20:44:13.162363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.213 [2024-11-25 20:44:13.165886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.213 [2024-11-25 20:44:13.165925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.213 [2024-11-25 20:44:13.165938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.506 ms 00:26:05.213 [2024-11-25 20:44:13.165949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.213 [2024-11-25 20:44:13.166053] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:05.213 [2024-11-25 20:44:13.166993] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:05.213 [2024-11-25 20:44:13.167024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.213 [2024-11-25 20:44:13.167036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.213 [2024-11-25 20:44:13.167047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:26:05.213 [2024-11-25 20:44:13.167058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.213 [2024-11-25 20:44:13.169530] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:05.213 [2024-11-25 20:44:13.189294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.213 [2024-11-25 20:44:13.189351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:05.213 [2024-11-25 20:44:13.189367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.797 ms 00:26:05.213 [2024-11-25 20:44:13.189378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.213 [2024-11-25 20:44:13.189529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.213 [2024-11-25 20:44:13.189544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:05.213 [2024-11-25 20:44:13.189556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:05.213 [2024-11-25 20:44:13.189568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.213 [2024-11-25 20:44:13.201471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:05.213 [2024-11-25 20:44:13.201503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.213 [2024-11-25 20:44:13.201517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.870 ms 00:26:05.213 [2024-11-25 20:44:13.201527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.213 [2024-11-25 20:44:13.201660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.213 [2024-11-25 20:44:13.201677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.213 [2024-11-25 20:44:13.201689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:05.213 [2024-11-25 20:44:13.201700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.213 [2024-11-25 20:44:13.201733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.213 [2024-11-25 20:44:13.201745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:05.213 [2024-11-25 20:44:13.201756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:05.213 [2024-11-25 20:44:13.201767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.214 [2024-11-25 20:44:13.201793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:05.214 [2024-11-25 20:44:13.207576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.214 [2024-11-25 20:44:13.207610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.214 [2024-11-25 20:44:13.207623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.801 ms 00:26:05.214 [2024-11-25 20:44:13.207634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.214 [2024-11-25 20:44:13.207703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.214 [2024-11-25 20:44:13.207715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:05.214 [2024-11-25 20:44:13.207726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:05.214 [2024-11-25 20:44:13.207736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.214 [2024-11-25 20:44:13.207762] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:05.214 [2024-11-25 20:44:13.207787] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:05.214 [2024-11-25 20:44:13.207827] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:05.214 [2024-11-25 20:44:13.207847] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:05.214 [2024-11-25 20:44:13.207941] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:05.214 [2024-11-25 20:44:13.207955] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:05.214 [2024-11-25 20:44:13.207969] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:05.214 [2024-11-25 20:44:13.207986] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:05.214 [2024-11-25 20:44:13.207999] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208011] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:05.214 [2024-11-25 20:44:13.208021] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:05.214 [2024-11-25 20:44:13.208032] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:05.214 [2024-11-25 20:44:13.208042] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:05.214 [2024-11-25 20:44:13.208053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.214 [2024-11-25 20:44:13.208063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:05.214 [2024-11-25 20:44:13.208075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:26:05.214 [2024-11-25 20:44:13.208085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.214 [2024-11-25 20:44:13.208163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.214 [2024-11-25 20:44:13.208179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:05.214 [2024-11-25 20:44:13.208190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:05.214 [2024-11-25 20:44:13.208201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.214 [2024-11-25 20:44:13.208295] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:05.214 [2024-11-25 20:44:13.208309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:05.214 [2024-11-25 20:44:13.208320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:05.214 [2024-11-25 20:44:13.208367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:05.214 [2024-11-25 20:44:13.208396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.214 [2024-11-25 20:44:13.208415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:05.214 [2024-11-25 20:44:13.208438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:05.214 [2024-11-25 20:44:13.208449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.214 [2024-11-25 20:44:13.208459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:05.214 [2024-11-25 20:44:13.208468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:05.214 [2024-11-25 20:44:13.208478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:05.214 [2024-11-25 20:44:13.208497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208506] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:05.214 [2024-11-25 20:44:13.208526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:05.214 [2024-11-25 20:44:13.208555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:05.214 [2024-11-25 20:44:13.208582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:05.214 [2024-11-25 20:44:13.208609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:05.214 [2024-11-25 20:44:13.208636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.214 [2024-11-25 20:44:13.208654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:05.214 [2024-11-25 20:44:13.208663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:05.214 [2024-11-25 20:44:13.208671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.214 [2024-11-25 20:44:13.208680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:05.214 [2024-11-25 20:44:13.208689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:05.214 [2024-11-25 20:44:13.208698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:05.214 [2024-11-25 20:44:13.208716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:05.214 [2024-11-25 20:44:13.208725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208735] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:05.214 [2024-11-25 20:44:13.208745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:05.214 [2024-11-25 20:44:13.208759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.214 [2024-11-25 20:44:13.208781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:05.214 [2024-11-25 20:44:13.208791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:05.214 [2024-11-25 20:44:13.208800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:05.214 
[2024-11-25 20:44:13.208811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:05.214 [2024-11-25 20:44:13.208820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:05.214 [2024-11-25 20:44:13.208830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:05.214 [2024-11-25 20:44:13.208841] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:05.214 [2024-11-25 20:44:13.208854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.214 [2024-11-25 20:44:13.208865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:05.214 [2024-11-25 20:44:13.208874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:05.214 [2024-11-25 20:44:13.208884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:05.214 [2024-11-25 20:44:13.208895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:05.214 [2024-11-25 20:44:13.208911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:05.214 [2024-11-25 20:44:13.208922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:05.214 [2024-11-25 20:44:13.208933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:05.214 [2024-11-25 20:44:13.208944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:05.214 [2024-11-25 20:44:13.208954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:05.214 [2024-11-25 20:44:13.208964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:05.214 [2024-11-25 20:44:13.208975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:05.214 [2024-11-25 20:44:13.208985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:05.215 [2024-11-25 20:44:13.208995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:05.215 [2024-11-25 20:44:13.209006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:05.215 [2024-11-25 20:44:13.209016] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:05.215 [2024-11-25 20:44:13.209028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.215 [2024-11-25 20:44:13.209039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:05.215 [2024-11-25 20:44:13.209049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:05.215 [2024-11-25 20:44:13.209059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:05.215 [2024-11-25 20:44:13.209069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:05.215 [2024-11-25 20:44:13.209082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.215 [2024-11-25 20:44:13.209097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:05.215 [2024-11-25 20:44:13.209107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:26:05.215 [2024-11-25 20:44:13.209117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.215 [2024-11-25 20:44:13.257323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.215 [2024-11-25 20:44:13.257379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:05.215 [2024-11-25 20:44:13.257410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.225 ms 00:26:05.215 [2024-11-25 20:44:13.257422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.215 [2024-11-25 20:44:13.257580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.215 [2024-11-25 20:44:13.257594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:05.215 [2024-11-25 20:44:13.257614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:05.215 [2024-11-25 20:44:13.257640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.215 [2024-11-25 20:44:13.326071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.215 [2024-11-25 20:44:13.326110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.215 [2024-11-25 20:44:13.326130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.517 ms 00:26:05.215 [2024-11-25 20:44:13.326157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.215 [2024-11-25 20:44:13.326244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.215 [2024-11-25 20:44:13.326258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.215 [2024-11-25 20:44:13.326270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:05.215 [2024-11-25 20:44:13.326281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.215 [2024-11-25 20:44:13.327028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.215 [2024-11-25 20:44:13.327042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.215 [2024-11-25 20:44:13.327054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:26:05.215 [2024-11-25 20:44:13.327072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.215 [2024-11-25 20:44:13.327208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.215 [2024-11-25 20:44:13.327222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.215 [2024-11-25 20:44:13.327234] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:05.215 [2024-11-25 20:44:13.327244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.351700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.351738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.475 [2024-11-25 20:44:13.351754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.470 ms 00:26:05.475 [2024-11-25 20:44:13.351766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.371644] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:26:05.475 [2024-11-25 20:44:13.371682] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:05.475 [2024-11-25 20:44:13.371713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.371725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:05.475 [2024-11-25 20:44:13.371737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.856 ms 00:26:05.475 [2024-11-25 20:44:13.371747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.400863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.400903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:05.475 [2024-11-25 20:44:13.400933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.078 ms 00:26:05.475 [2024-11-25 20:44:13.400944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.418659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.418697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:05.475 [2024-11-25 20:44:13.418710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.660 ms 00:26:05.475 [2024-11-25 20:44:13.418720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.436267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.436302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:05.475 [2024-11-25 20:44:13.436315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.480 ms 00:26:05.475 [2024-11-25 20:44:13.436332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.437121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.437149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:05.475 [2024-11-25 20:44:13.437161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:26:05.475 [2024-11-25 20:44:13.437172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.529938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.530026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:05.475 [2024-11-25 20:44:13.530045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 92.886 ms 00:26:05.475 [2024-11-25 20:44:13.530056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.540398] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:05.475 [2024-11-25 20:44:13.564530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.564583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:05.475 [2024-11-25 20:44:13.564600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.434 ms 00:26:05.475 [2024-11-25 20:44:13.564634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.564767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.564781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:05.475 [2024-11-25 20:44:13.564793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:05.475 [2024-11-25 20:44:13.564804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.564875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.564887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:05.475 [2024-11-25 20:44:13.564897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:05.475 [2024-11-25 20:44:13.564913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.564947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.564959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:05.475 [2024-11-25 20:44:13.564970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:05.475 [2024-11-25 20:44:13.564981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.565037] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:05.475 [2024-11-25 20:44:13.565050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.565061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:05.475 [2024-11-25 20:44:13.565072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:05.475 [2024-11-25 20:44:13.565081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.601417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.601458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:05.475 [2024-11-25 20:44:13.601489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.368 ms 00:26:05.475 [2024-11-25 20:44:13.601500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.475 [2024-11-25 20:44:13.601637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.475 [2024-11-25 20:44:13.601668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:05.475 [2024-11-25 20:44:13.601680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:05.475 [2024-11-25 20:44:13.601690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
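A quick cross-check on the layout dump above: the l2p region is listed at 90.00 MiB, which matches the advertised table dimensions exactly (23592960 L2P entries at an L2P address size of 4 bytes). Verifying with the two numbers taken from the log, in plain shell arithmetic:

  # 23592960 entries * 4 bytes per entry, converted to MiB
  echo $(( 23592960 * 4 / 1048576 ))   # prints 90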
00:26:05.475 [2024-11-25 20:44:13.602955] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.475 [2024-11-25 20:44:13.607237] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 441.040 ms, result 0 00:26:05.734 [2024-11-25 20:44:13.608207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:05.734 [2024-11-25 20:44:13.626542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.734  [2024-11-25T20:44:13.870Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-11-25 20:44:13.803838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:05.734 [2024-11-25 20:44:13.818132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.734 [2024-11-25 20:44:13.818173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:05.734 [2024-11-25 20:44:13.818187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:05.734 [2024-11-25 20:44:13.818204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.734 [2024-11-25 20:44:13.818243] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:05.734 [2024-11-25 20:44:13.822832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.734 [2024-11-25 20:44:13.822861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:05.734 [2024-11-25 20:44:13.822874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.581 ms 00:26:05.734 [2024-11-25 20:44:13.822900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.734 [2024-11-25 20:44:13.825140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.734 [2024-11-25 20:44:13.825177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:05.734 [2024-11-25 20:44:13.825190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:26:05.734 [2024-11-25 20:44:13.825201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.735 [2024-11-25 20:44:13.828432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.735 [2024-11-25 20:44:13.828471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:05.735 [2024-11-25 20:44:13.828484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:26:05.735 [2024-11-25 20:44:13.828494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.735 [2024-11-25 20:44:13.833947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.735 [2024-11-25 20:44:13.833979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:05.735 [2024-11-25 20:44:13.833991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.431 ms 00:26:05.735 [2024-11-25 20:44:13.834018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.995 [2024-11-25 20:44:13.869471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.995 [2024-11-25 20:44:13.869511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:05.995 [2024-11-25 20:44:13.869544] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.458 ms 00:26:05.995 [2024-11-25 20:44:13.869557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.995 [2024-11-25 20:44:13.890218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.995 [2024-11-25 20:44:13.890258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:05.995 [2024-11-25 20:44:13.890279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.606 ms 00:26:05.995 [2024-11-25 20:44:13.890290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.995 [2024-11-25 20:44:13.890440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.995 [2024-11-25 20:44:13.890459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:05.995 [2024-11-25 20:44:13.890492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:05.995 [2024-11-25 20:44:13.890502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.995 [2024-11-25 20:44:13.925807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.995 [2024-11-25 20:44:13.925844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:05.995 [2024-11-25 20:44:13.925858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.343 ms 00:26:05.995 [2024-11-25 20:44:13.925868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.995 [2024-11-25 20:44:13.960763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.995 [2024-11-25 20:44:13.960800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:05.995 [2024-11-25 20:44:13.960829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.882 ms 00:26:05.995 [2024-11-25 20:44:13.960839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.995 [2024-11-25 20:44:13.996208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.995 [2024-11-25 20:44:13.996245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:05.995 [2024-11-25 20:44:13.996274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.370 ms 00:26:05.995 [2024-11-25 20:44:13.996285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.995 [2024-11-25 20:44:14.031326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.995 [2024-11-25 20:44:14.031369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:05.995 [2024-11-25 20:44:14.031382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.008 ms 00:26:05.995 [2024-11-25 20:44:14.031392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.995 [2024-11-25 20:44:14.031460] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:05.995 [2024-11-25 20:44:14.031478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:26:05.995 [2024-11-25 20:44:14.031525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:05.995 [2024-11-25 20:44:14.031599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.031995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032297] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:05.996 [2024-11-25 20:44:14.032567] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:05.996 [2024-11-25 20:44:14.032578] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6 00:26:05.997 [2024-11-25 20:44:14.032589] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:05.997 [2024-11-25 20:44:14.032599] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:26:05.997 [2024-11-25 20:44:14.032609] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:05.997 [2024-11-25 20:44:14.032620] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:05.997 [2024-11-25 20:44:14.032630] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:05.997 [2024-11-25 20:44:14.032640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:05.997 [2024-11-25 20:44:14.032654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:05.997 [2024-11-25 20:44:14.032663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:05.997 [2024-11-25 20:44:14.032673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:05.997 [2024-11-25 20:44:14.032683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.997 [2024-11-25 20:44:14.032694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:05.997 [2024-11-25 20:44:14.032704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:26:05.997 [2024-11-25 20:44:14.032714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.997 [2024-11-25 20:44:14.053728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.997 [2024-11-25 20:44:14.053763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:05.997 [2024-11-25 20:44:14.053775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.028 ms 00:26:05.997 [2024-11-25 20:44:14.053786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.997 [2024-11-25 20:44:14.054453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.997 [2024-11-25 20:44:14.054466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:05.997 [2024-11-25 20:44:14.054477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:26:05.997 [2024-11-25 20:44:14.054487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.997 [2024-11-25 20:44:14.112039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.997 [2024-11-25 20:44:14.112078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.997 [2024-11-25 20:44:14.112108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.997 [2024-11-25 20:44:14.112125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.997 [2024-11-25 20:44:14.112226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.997 [2024-11-25 20:44:14.112238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.997 [2024-11-25 20:44:14.112249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.997 [2024-11-25 20:44:14.112259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.997 [2024-11-25 20:44:14.112310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.997 [2024-11-25 20:44:14.112323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.997 [2024-11-25 20:44:14.112334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.997 [2024-11-25 20:44:14.112356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.997 [2024-11-25 20:44:14.112382] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:05.997 [2024-11-25 20:44:14.112394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.997 [2024-11-25 20:44:14.112404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:05.997 [2024-11-25 20:44:14.112414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.256 [2024-11-25 20:44:14.247629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.256 [2024-11-25 20:44:14.247704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:06.256 [2024-11-25 20:44:14.247722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.256 [2024-11-25 20:44:14.247749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.256 [2024-11-25 20:44:14.355638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.256 [2024-11-25 20:44:14.355697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:06.256 [2024-11-25 20:44:14.355714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.256 [2024-11-25 20:44:14.355727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.256 [2024-11-25 20:44:14.355848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.256 [2024-11-25 20:44:14.355862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:06.256 [2024-11-25 20:44:14.355873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.256 [2024-11-25 20:44:14.355884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.256 [2024-11-25 20:44:14.355917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.256 [2024-11-25 20:44:14.355936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:06.256 [2024-11-25 20:44:14.355947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.256 [2024-11-25 20:44:14.355958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.256 [2024-11-25 20:44:14.356088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.256 [2024-11-25 20:44:14.356101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:06.256 [2024-11-25 20:44:14.356112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.256 [2024-11-25 20:44:14.356123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.256 [2024-11-25 20:44:14.356168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.256 [2024-11-25 20:44:14.356182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:06.256 [2024-11-25 20:44:14.356197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.256 [2024-11-25 20:44:14.356209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.256 [2024-11-25 20:44:14.356258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.256 [2024-11-25 20:44:14.356270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:06.256 [2024-11-25 20:44:14.356280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.256 [2024-11-25 20:44:14.356291] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:26:06.256 [2024-11-25 20:44:14.356365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:06.256 [2024-11-25 20:44:14.356383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:06.256 [2024-11-25 20:44:14.356394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:06.256 [2024-11-25 20:44:14.356405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:06.257 [2024-11-25 20:44:14.356571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.293 ms, result 0
00:26:07.635
00:26:07.635
00:26:07.635 20:44:15 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79129
00:26:07.635 20:44:15 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:26:07.635 20:44:15 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79129
00:26:07.635 20:44:15 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79129 ']'
00:26:07.635 20:44:15 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:07.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:07.635 20:44:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:07.635 20:44:15 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:07.635 20:44:15 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:07.635 20:44:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:26:07.635 [2024-11-25 20:44:15.617260] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization...
00:26:07.635 [2024-11-25 20:44:15.617405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79129 ]
00:26:07.894 [2024-11-25 20:44:15.796826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:07.894 [2024-11-25 20:44:15.933766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:08.829 20:44:16 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:08.829 20:44:16 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:26:08.829 20:44:16 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:26:09.087 [2024-11-25 20:44:17.136402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:09.087 [2024-11-25 20:44:17.136476] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:09.347 [2024-11-25 20:44:17.319109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:09.347 [2024-11-25 20:44:17.319175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:26:09.347 [2024-11-25 20:44:17.319196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:26:09.347 [2024-11-25 20:44:17.319224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:09.347 [2024-11-25 20:44:17.322792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:09.347 [2024-11-25 20:44:17.322835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:09.347 [2024-11-25 20:44:17.322851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.550 ms
00:26:09.347 [2024-11-25 20:44:17.322862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:09.347 [2024-11-25 20:44:17.322971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:09.347 [2024-11-25 20:44:17.323992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:09.347 [2024-11-25 20:44:17.324028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:09.347 [2024-11-25 20:44:17.324040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:09.347 [2024-11-25 20:44:17.324054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms
00:26:09.347 [2024-11-25 20:44:17.324066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:09.347 [2024-11-25 20:44:17.326685] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:09.347 [2024-11-25 20:44:17.346127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:09.347 [2024-11-25 20:44:17.346173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:26:09.347 [2024-11-25 20:44:17.346188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.478 ms
00:26:09.347 [2024-11-25 20:44:17.346202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:09.347 [2024-11-25 20:44:17.346302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:09.347 [2024-11-25 20:44:17.346320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:26:09.347 [2024-11-25 20:44:17.346344]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:09.347 [2024-11-25 20:44:17.346359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.347 [2024-11-25 20:44:17.359007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.347 [2024-11-25 20:44:17.359067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:09.347 [2024-11-25 20:44:17.359081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.613 ms 00:26:09.347 [2024-11-25 20:44:17.359095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.347 [2024-11-25 20:44:17.359233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.347 [2024-11-25 20:44:17.359252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:09.347 [2024-11-25 20:44:17.359263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:09.347 [2024-11-25 20:44:17.359282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.347 [2024-11-25 20:44:17.359318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.347 [2024-11-25 20:44:17.359333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:09.347 [2024-11-25 20:44:17.359360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:09.347 [2024-11-25 20:44:17.359374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.347 [2024-11-25 20:44:17.359404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:09.347 [2024-11-25 20:44:17.365097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.347 [2024-11-25 20:44:17.365129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:09.347 [2024-11-25 20:44:17.365162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.705 ms 00:26:09.347 [2024-11-25 20:44:17.365172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.347 [2024-11-25 20:44:17.365232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.347 [2024-11-25 20:44:17.365244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:09.347 [2024-11-25 20:44:17.365259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:09.347 [2024-11-25 20:44:17.365273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.347 [2024-11-25 20:44:17.365300] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:09.347 [2024-11-25 20:44:17.365324] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:09.347 [2024-11-25 20:44:17.365383] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:09.347 [2024-11-25 20:44:17.365406] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:09.347 [2024-11-25 20:44:17.365502] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:09.347 [2024-11-25 20:44:17.365517] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:09.347 [2024-11-25 20:44:17.365539] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:09.347 [2024-11-25 20:44:17.365553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:09.347 [2024-11-25 20:44:17.365570] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:09.347 [2024-11-25 20:44:17.365582] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:09.347 [2024-11-25 20:44:17.365595] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:09.347 [2024-11-25 20:44:17.365615] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:09.347 [2024-11-25 20:44:17.365632] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:09.347 [2024-11-25 20:44:17.365643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.347 [2024-11-25 20:44:17.365657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:09.347 [2024-11-25 20:44:17.365668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:26:09.347 [2024-11-25 20:44:17.365681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.347 [2024-11-25 20:44:17.365763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.347 [2024-11-25 20:44:17.365777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:09.347 [2024-11-25 20:44:17.365787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:09.347 [2024-11-25 20:44:17.365801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.347 [2024-11-25 20:44:17.365895] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:09.347 [2024-11-25 20:44:17.365911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:09.347 [2024-11-25 20:44:17.365923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:09.347 [2024-11-25 20:44:17.365936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.347 [2024-11-25 20:44:17.365948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:09.347 [2024-11-25 20:44:17.365960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:09.347 [2024-11-25 20:44:17.365970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:09.347 [2024-11-25 20:44:17.365987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:09.347 [2024-11-25 20:44:17.365996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:09.347 [2024-11-25 20:44:17.366008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:09.347 [2024-11-25 20:44:17.366018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:09.347 [2024-11-25 20:44:17.366033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:09.347 [2024-11-25 20:44:17.366042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:09.347 [2024-11-25 20:44:17.366054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:09.347 [2024-11-25 20:44:17.366064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:09.347 [2024-11-25 20:44:17.366075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.347 
[2024-11-25 20:44:17.366084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:09.347 [2024-11-25 20:44:17.366096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:09.347 [2024-11-25 20:44:17.366116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.347 [2024-11-25 20:44:17.366129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:09.347 [2024-11-25 20:44:17.366138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:09.348 [2024-11-25 20:44:17.366150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.348 [2024-11-25 20:44:17.366159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:09.348 [2024-11-25 20:44:17.366174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:09.348 [2024-11-25 20:44:17.366183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.348 [2024-11-25 20:44:17.366196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:09.348 [2024-11-25 20:44:17.366205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:09.348 [2024-11-25 20:44:17.366217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.348 [2024-11-25 20:44:17.366226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:09.348 [2024-11-25 20:44:17.366239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:09.348 [2024-11-25 20:44:17.366248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.348 [2024-11-25 20:44:17.366260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:09.348 [2024-11-25 20:44:17.366269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:09.348 [2024-11-25 20:44:17.366282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:09.348 [2024-11-25 20:44:17.366291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:09.348 [2024-11-25 20:44:17.366303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:09.348 [2024-11-25 20:44:17.366312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:09.348 [2024-11-25 20:44:17.366334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:09.348 [2024-11-25 20:44:17.366344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:09.348 [2024-11-25 20:44:17.366360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.348 [2024-11-25 20:44:17.366376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:09.348 [2024-11-25 20:44:17.366388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:09.348 [2024-11-25 20:44:17.366397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.348 [2024-11-25 20:44:17.366410] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:09.348 [2024-11-25 20:44:17.366423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:09.348 [2024-11-25 20:44:17.366437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:09.348 [2024-11-25 20:44:17.366447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.348 [2024-11-25 20:44:17.366461] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:26:09.348 [2024-11-25 20:44:17.366471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:09.348 [2024-11-25 20:44:17.366483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:09.348 [2024-11-25 20:44:17.366492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:09.348 [2024-11-25 20:44:17.366504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:09.348 [2024-11-25 20:44:17.366514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:09.348 [2024-11-25 20:44:17.366527] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:09.348 [2024-11-25 20:44:17.366539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:09.348 [2024-11-25 20:44:17.366557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:09.348 [2024-11-25 20:44:17.366568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:09.348 [2024-11-25 20:44:17.366583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:09.348 [2024-11-25 20:44:17.366593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:09.348 [2024-11-25 20:44:17.366607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:09.348 [2024-11-25 20:44:17.366617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:09.348 [2024-11-25 20:44:17.366630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:09.348 [2024-11-25 20:44:17.366640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:09.348 [2024-11-25 20:44:17.366654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:09.348 [2024-11-25 20:44:17.366665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:09.348 [2024-11-25 20:44:17.366678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:09.348 [2024-11-25 20:44:17.366688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:09.348 [2024-11-25 20:44:17.366701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:09.348 [2024-11-25 20:44:17.366711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:09.348 [2024-11-25 20:44:17.366724] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:09.348 [2024-11-25 
20:44:17.366736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:09.348 [2024-11-25 20:44:17.366753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:09.348 [2024-11-25 20:44:17.366763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:09.348 [2024-11-25 20:44:17.366776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:09.348 [2024-11-25 20:44:17.366786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:09.348 [2024-11-25 20:44:17.366800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.348 [2024-11-25 20:44:17.366811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:09.348 [2024-11-25 20:44:17.366824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:26:09.348 [2024-11-25 20:44:17.366837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.348 [2024-11-25 20:44:17.417163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.348 [2024-11-25 20:44:17.417204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:09.348 [2024-11-25 20:44:17.417222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.336 ms 00:26:09.348 [2024-11-25 20:44:17.417237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.348 [2024-11-25 20:44:17.417455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.348 [2024-11-25 20:44:17.417473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:09.348 [2024-11-25 20:44:17.417489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:09.348 [2024-11-25 20:44:17.417500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.348 [2024-11-25 20:44:17.474104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.348 [2024-11-25 20:44:17.474141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:09.348 [2024-11-25 20:44:17.474158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.663 ms 00:26:09.348 [2024-11-25 20:44:17.474168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.348 [2024-11-25 20:44:17.474271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.348 [2024-11-25 20:44:17.474284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:09.348 [2024-11-25 20:44:17.474298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:09.348 [2024-11-25 20:44:17.474308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.348 [2024-11-25 20:44:17.475177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.348 [2024-11-25 20:44:17.475209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:09.348 [2024-11-25 20:44:17.475225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:26:09.348 [2024-11-25 20:44:17.475236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:09.349 [2024-11-25 20:44:17.475406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.349 [2024-11-25 20:44:17.475420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:09.349 [2024-11-25 20:44:17.475434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:26:09.349 [2024-11-25 20:44:17.475445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.607 [2024-11-25 20:44:17.502028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.607 [2024-11-25 20:44:17.502067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:09.607 [2024-11-25 20:44:17.502084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.595 ms 00:26:09.607 [2024-11-25 20:44:17.502096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.607 [2024-11-25 20:44:17.522539] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:09.608 [2024-11-25 20:44:17.522578] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:09.608 [2024-11-25 20:44:17.522614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.608 [2024-11-25 20:44:17.522626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:09.608 [2024-11-25 20:44:17.522640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.409 ms 00:26:09.608 [2024-11-25 20:44:17.522661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.608 [2024-11-25 20:44:17.552642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.608 [2024-11-25 20:44:17.552682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:09.608 [2024-11-25 20:44:17.552701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.920 ms 00:26:09.608 [2024-11-25 20:44:17.552712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.608 [2024-11-25 20:44:17.570965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.608 [2024-11-25 20:44:17.571002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:09.608 [2024-11-25 20:44:17.571037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.174 ms 00:26:09.608 [2024-11-25 20:44:17.571047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.608 [2024-11-25 20:44:17.587782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.608 [2024-11-25 20:44:17.587816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:09.608 [2024-11-25 20:44:17.587848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.681 ms 00:26:09.608 [2024-11-25 20:44:17.587857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.608 [2024-11-25 20:44:17.588666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.608 [2024-11-25 20:44:17.588699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:09.608 [2024-11-25 20:44:17.588715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:26:09.608 [2024-11-25 20:44:17.588725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.608 [2024-11-25 
20:44:17.709393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.608 [2024-11-25 20:44:17.709498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:09.608 [2024-11-25 20:44:17.709538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.825 ms 00:26:09.608 [2024-11-25 20:44:17.709550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.608 [2024-11-25 20:44:17.720459] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:09.867 [2024-11-25 20:44:17.746921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.867 [2024-11-25 20:44:17.746999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:09.867 [2024-11-25 20:44:17.747023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.317 ms 00:26:09.867 [2024-11-25 20:44:17.747038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.867 [2024-11-25 20:44:17.747194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.867 [2024-11-25 20:44:17.747213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:09.867 [2024-11-25 20:44:17.747225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:09.867 [2024-11-25 20:44:17.747239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.867 [2024-11-25 20:44:17.747313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.867 [2024-11-25 20:44:17.747347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:09.867 [2024-11-25 20:44:17.747359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:09.867 [2024-11-25 20:44:17.747377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.867 [2024-11-25 20:44:17.747407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.867 [2024-11-25 20:44:17.747422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:09.867 [2024-11-25 20:44:17.747433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:09.867 [2024-11-25 20:44:17.747446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.867 [2024-11-25 20:44:17.747492] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:09.867 [2024-11-25 20:44:17.747511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.867 [2024-11-25 20:44:17.747527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:09.867 [2024-11-25 20:44:17.747541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:09.867 [2024-11-25 20:44:17.747551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.867 [2024-11-25 20:44:17.785065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.867 [2024-11-25 20:44:17.785107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:09.867 [2024-11-25 20:44:17.785140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.538 ms 00:26:09.867 [2024-11-25 20:44:17.785151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.867 [2024-11-25 20:44:17.785276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.867 [2024-11-25 20:44:17.785290] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:26:09.867 [2024-11-25 20:44:17.785308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:26:09.867 [2024-11-25 20:44:17.785319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:09.867 [2024-11-25 20:44:17.786712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:09.867 [2024-11-25 20:44:17.791120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 467.985 ms, result 0
00:26:09.867 [2024-11-25 20:44:17.792293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:09.867 Some configs were skipped because the RPC state that can call them passed over.
00:26:09.867 20:44:17 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:26:10.126 [2024-11-25 20:44:18.035740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:10.126 [2024-11-25 20:44:18.035807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:26:10.126 [2024-11-25 20:44:18.035825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms
00:26:10.126 [2024-11-25 20:44:18.035841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:10.126 [2024-11-25 20:44:18.035882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.881 ms, result 0
00:26:10.126 true
00:26:10.126 20:44:18 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:26:10.126 [2024-11-25 20:44:18.239044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:10.126 [2024-11-25 20:44:18.239096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:26:10.126 [2024-11-25 20:44:18.239115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms
00:26:10.126 [2024-11-25 20:44:18.239126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:10.126 [2024-11-25 20:44:18.239168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.316 ms, result 0
00:26:10.126 true
00:26:10.385 20:44:18 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79129
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79129 ']'
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79129
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79129
00:26:10.385 killing process with pid 79129
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79129'
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79129
00:26:10.385 20:44:18 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79129
00:26:11.763 [2024-11-25 20:44:19.541847]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.541914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:11.763 [2024-11-25 20:44:19.541932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:11.763 [2024-11-25 20:44:19.541945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.541974] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:11.763 [2024-11-25 20:44:19.546749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.546787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:11.763 [2024-11-25 20:44:19.546807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:26:11.763 [2024-11-25 20:44:19.546819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.547115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.547135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:11.763 [2024-11-25 20:44:19.547149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:26:11.763 [2024-11-25 20:44:19.547159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.550513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.550552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:11.763 [2024-11-25 20:44:19.550571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.332 ms 00:26:11.763 [2024-11-25 20:44:19.550581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.556195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.556232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:11.763 [2024-11-25 20:44:19.556262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.582 ms 00:26:11.763 [2024-11-25 20:44:19.556273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.571470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.571513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:11.763 [2024-11-25 20:44:19.571548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.154 ms 00:26:11.763 [2024-11-25 20:44:19.571558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.582024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.582077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:11.763 [2024-11-25 20:44:19.582096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.407 ms 00:26:11.763 [2024-11-25 20:44:19.582106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.582264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.582278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:11.763 [2024-11-25 20:44:19.582292] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:11.763 [2024-11-25 20:44:19.582302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.597357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.597390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:11.763 [2024-11-25 20:44:19.597405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.056 ms 00:26:11.763 [2024-11-25 20:44:19.597429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.611876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.611907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:11.763 [2024-11-25 20:44:19.611941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.416 ms 00:26:11.763 [2024-11-25 20:44:19.611951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.625724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.625759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:11.763 [2024-11-25 20:44:19.625777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.743 ms 00:26:11.763 [2024-11-25 20:44:19.625787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.640033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.763 [2024-11-25 20:44:19.640062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:11.763 [2024-11-25 20:44:19.640093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.189 ms 00:26:11.763 [2024-11-25 20:44:19.640102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.763 [2024-11-25 20:44:19.640152] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:11.763 [2024-11-25 20:44:19.640170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 
20:44:19.640295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:26:11.763 [2024-11-25 20:44:19.640628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:11.763 [2024-11-25 20:44:19.640754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.640989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:11.764 [2024-11-25 20:44:19.641459] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:11.764 [2024-11-25 20:44:19.641482] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6 00:26:11.764 [2024-11-25 20:44:19.641498] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:11.764 [2024-11-25 20:44:19.641511] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:11.764 [2024-11-25 20:44:19.641521] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:11.764 [2024-11-25 20:44:19.641534] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:11.764 [2024-11-25 20:44:19.641545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:11.764 [2024-11-25 20:44:19.641558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:11.764 [2024-11-25 20:44:19.641568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:11.764 [2024-11-25 20:44:19.641580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:11.764 [2024-11-25 20:44:19.641589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:11.764 [2024-11-25 20:44:19.641610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
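In the "Bands validity" dump above, each entry reads "Band N: valid / size wr_cnt: W state: S": valid blocks out of the 261120-block band, the band's cumulative write count, and its current state. After a clean shutdown with no user I/O, all 100 bands are free with zero valid blocks. In the statistics block above, WAF (write amplification factor) is conventionally media writes divided by user writes; with total writes: 960 (all internal metadata traffic) and user writes: 0, it prints "inf". A sketch that tallies the dump, again assuming a captured build.log:

  # Total the per-band "valid / size" columns from ftl_dev_dump_bands.
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]*' build.log |
    awk '{ valid += $3; size += $5 } END { printf "%d valid of %d blocks\n", valid, size }'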
00:26:11.764 [2024-11-25 20:44:19.641621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:11.764 [2024-11-25 20:44:19.641635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:26:11.764 [2024-11-25 20:44:19.641645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.764 [2024-11-25 20:44:19.662753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.764 [2024-11-25 20:44:19.662787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:11.764 [2024-11-25 20:44:19.662806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.110 ms 00:26:11.764 [2024-11-25 20:44:19.662816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.764 [2024-11-25 20:44:19.663488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.764 [2024-11-25 20:44:19.663512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:11.764 [2024-11-25 20:44:19.663531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:26:11.764 [2024-11-25 20:44:19.663541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.764 [2024-11-25 20:44:19.735355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.764 [2024-11-25 20:44:19.735399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:11.764 [2024-11-25 20:44:19.735417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.764 [2024-11-25 20:44:19.735445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.764 [2024-11-25 20:44:19.735558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.764 [2024-11-25 20:44:19.735571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:11.764 [2024-11-25 20:44:19.735590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.764 [2024-11-25 20:44:19.735601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.764 [2024-11-25 20:44:19.735660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.764 [2024-11-25 20:44:19.735674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:11.764 [2024-11-25 20:44:19.735692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.764 [2024-11-25 20:44:19.735703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.764 [2024-11-25 20:44:19.735727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.764 [2024-11-25 20:44:19.735738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:11.764 [2024-11-25 20:44:19.735752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.764 [2024-11-25 20:44:19.735765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.764 [2024-11-25 20:44:19.868074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.764 [2024-11-25 20:44:19.868142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:11.764 [2024-11-25 20:44:19.868162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.765 [2024-11-25 20:44:19.868174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.024 [2024-11-25 
20:44:19.974746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.024 [2024-11-25 20:44:19.974821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:12.024 [2024-11-25 20:44:19.974842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.024 [2024-11-25 20:44:19.974858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.024 [2024-11-25 20:44:19.975004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.024 [2024-11-25 20:44:19.975018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:12.024 [2024-11-25 20:44:19.975037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.024 [2024-11-25 20:44:19.975048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.024 [2024-11-25 20:44:19.975087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.024 [2024-11-25 20:44:19.975099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:12.024 [2024-11-25 20:44:19.975112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.024 [2024-11-25 20:44:19.975122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.024 [2024-11-25 20:44:19.975262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.024 [2024-11-25 20:44:19.975276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:12.024 [2024-11-25 20:44:19.975291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.024 [2024-11-25 20:44:19.975301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.024 [2024-11-25 20:44:19.975363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.024 [2024-11-25 20:44:19.975377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:12.024 [2024-11-25 20:44:19.975391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.024 [2024-11-25 20:44:19.975402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.024 [2024-11-25 20:44:19.975459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.024 [2024-11-25 20:44:19.975471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:12.024 [2024-11-25 20:44:19.975489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.024 [2024-11-25 20:44:19.975500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.024 [2024-11-25 20:44:19.975560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.024 [2024-11-25 20:44:19.975572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:12.024 [2024-11-25 20:44:19.975586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.024 [2024-11-25 20:44:19.975597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.024 [2024-11-25 20:44:19.975771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 434.593 ms, result 0 00:26:13.403 20:44:21 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:13.403 [2024-11-25 20:44:21.198088] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:26:13.403 [2024-11-25 20:44:21.198248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79197 ] 00:26:13.403 [2024-11-25 20:44:21.383101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.403 [2024-11-25 20:44:21.527903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.972 [2024-11-25 20:44:21.943922] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:13.972 [2024-11-25 20:44:21.943999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:14.233 [2024-11-25 20:44:22.111846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.111904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:14.233 [2024-11-25 20:44:22.111921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:14.233 [2024-11-25 20:44:22.111932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.115446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.115487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:14.233 [2024-11-25 20:44:22.115500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.498 ms 00:26:14.233 [2024-11-25 20:44:22.115510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.115615] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:14.233 [2024-11-25 20:44:22.116569] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:14.233 [2024-11-25 20:44:22.116604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.116615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:14.233 [2024-11-25 20:44:22.116627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:26:14.233 [2024-11-25 20:44:22.116638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.119217] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:14.233 [2024-11-25 20:44:22.139813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.139850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:14.233 [2024-11-25 20:44:22.139865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.630 ms 00:26:14.233 [2024-11-25 20:44:22.139892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.139995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.140012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:14.233 [2024-11-25 20:44:22.140024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:14.233 [2024-11-25 
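The trim test drives I/O with spdk_dd, SPDK's dd(1)-style utility that can put a bdev on either end of the copy: per the invocation above, --ib names the input bdev (ftl0), --of the output file, --count the number of blocks to transfer, and --json the bdev configuration to load at startup. Assuming the FTL bdev's 4 KiB block size (consistent with the 256 MB copy progress reported below), 65536 blocks is exactly 256 MiB. Trimmed to its essentials, the command is:

  # Read 65536 blocks (256 MiB at 4 KiB/block) from bdev ftl0 into a file.
  ./build/bin/spdk_dd --ib=ftl0 \
      --of=test/ftl/data \
      --count=65536 \
      --json=test/ftl/config/ftl.json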
20:44:22.140034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.152584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.152612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:14.233 [2024-11-25 20:44:22.152625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.525 ms 00:26:14.233 [2024-11-25 20:44:22.152650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.152797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.152814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:14.233 [2024-11-25 20:44:22.152826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:26:14.233 [2024-11-25 20:44:22.152837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.152881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.152896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:14.233 [2024-11-25 20:44:22.152907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:14.233 [2024-11-25 20:44:22.152917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.152942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:14.233 [2024-11-25 20:44:22.158738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.158771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:14.233 [2024-11-25 20:44:22.158783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.813 ms 00:26:14.233 [2024-11-25 20:44:22.158793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.158845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.233 [2024-11-25 20:44:22.158858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:14.233 [2024-11-25 20:44:22.158869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:14.233 [2024-11-25 20:44:22.158879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.233 [2024-11-25 20:44:22.158907] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:14.233 [2024-11-25 20:44:22.158932] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:14.233 [2024-11-25 20:44:22.158969] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:14.233 [2024-11-25 20:44:22.158998] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:14.233 [2024-11-25 20:44:22.159088] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:14.233 [2024-11-25 20:44:22.159101] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:14.234 [2024-11-25 20:44:22.159114] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
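The layout dump that follows lists each metadata region with its offset and size in MiB, then repeats the same regions in the superblock v5 view as hex FTL blocks (blk_offs/blk_sz). At the 4 KiB block size the two views agree: the base-device data region, type 0x9 with blk_sz 0x1900000, works out to the 102400.00 MiB shown for data_btm, and the 90.00 MiB l2p region is the 23592960 L2P entries times the 4-byte address size. A one-line check of the hex conversion (4 KiB/block assumed):

  # 0x1900000 FTL blocks at 4096 bytes each, expressed in MiB.
  echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # prints 102400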
00:26:14.234 [2024-11-25 20:44:22.159147] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159159] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159171] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:14.234 [2024-11-25 20:44:22.159181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:14.234 [2024-11-25 20:44:22.159193] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:14.234 [2024-11-25 20:44:22.159204] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:14.234 [2024-11-25 20:44:22.159215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-11-25 20:44:22.159225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:14.234 [2024-11-25 20:44:22.159235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:26:14.234 [2024-11-25 20:44:22.159246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-11-25 20:44:22.159321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-11-25 20:44:22.159336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:14.234 [2024-11-25 20:44:22.159347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:14.234 [2024-11-25 20:44:22.159369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-11-25 20:44:22.159476] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:14.234 [2024-11-25 20:44:22.159500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:14.234 [2024-11-25 20:44:22.159512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:14.234 [2024-11-25 20:44:22.159545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:14.234 [2024-11-25 20:44:22.159574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:14.234 [2024-11-25 20:44:22.159594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:14.234 [2024-11-25 20:44:22.159615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:14.234 [2024-11-25 20:44:22.159624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:14.234 [2024-11-25 20:44:22.159634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:14.234 [2024-11-25 20:44:22.159644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:14.234 [2024-11-25 20:44:22.159654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:26:14.234 [2024-11-25 20:44:22.159673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:14.234 [2024-11-25 20:44:22.159700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:14.234 [2024-11-25 20:44:22.159728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:14.234 [2024-11-25 20:44:22.159755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:14.234 [2024-11-25 20:44:22.159782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:14.234 [2024-11-25 20:44:22.159808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:14.234 [2024-11-25 20:44:22.159826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:14.234 [2024-11-25 20:44:22.159835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:14.234 [2024-11-25 20:44:22.159844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:14.234 [2024-11-25 20:44:22.159852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:14.234 [2024-11-25 20:44:22.159861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:14.234 [2024-11-25 20:44:22.159870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:14.234 [2024-11-25 20:44:22.159888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:14.234 [2024-11-25 20:44:22.159898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159907] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:14.234 [2024-11-25 20:44:22.159917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:14.234 [2024-11-25 20:44:22.159931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:14.234 [2024-11-25 20:44:22.159941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.234 [2024-11-25 20:44:22.159952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:14.234 [2024-11-25 20:44:22.159961] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:14.234 [2024-11-25 20:44:22.159971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:14.234 [2024-11-25 20:44:22.159980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:14.234 [2024-11-25 20:44:22.159990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:14.234 [2024-11-25 20:44:22.160000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:14.234 [2024-11-25 20:44:22.160011] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:14.234 [2024-11-25 20:44:22.160025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:14.234 [2024-11-25 20:44:22.160037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:14.234 [2024-11-25 20:44:22.160048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:14.234 [2024-11-25 20:44:22.160059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:14.234 [2024-11-25 20:44:22.160069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:14.234 [2024-11-25 20:44:22.160081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:14.234 [2024-11-25 20:44:22.160094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:14.234 [2024-11-25 20:44:22.160104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:14.234 [2024-11-25 20:44:22.160114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:14.234 [2024-11-25 20:44:22.160125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:14.234 [2024-11-25 20:44:22.160135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:14.234 [2024-11-25 20:44:22.160145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:14.234 [2024-11-25 20:44:22.160155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:14.234 [2024-11-25 20:44:22.160165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:14.234 [2024-11-25 20:44:22.160175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:14.234 [2024-11-25 20:44:22.160185] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:14.234 [2024-11-25 20:44:22.160196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:14.234 [2024-11-25 20:44:22.160206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:14.234 [2024-11-25 20:44:22.160216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:14.234 [2024-11-25 20:44:22.160226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:14.234 [2024-11-25 20:44:22.160237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:14.234 [2024-11-25 20:44:22.160248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-11-25 20:44:22.160264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:14.234 [2024-11-25 20:44:22.160274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:26:14.234 [2024-11-25 20:44:22.160283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-11-25 20:44:22.210834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-11-25 20:44:22.210882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:14.234 [2024-11-25 20:44:22.210897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.556 ms 00:26:14.234 [2024-11-25 20:44:22.210908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-11-25 20:44:22.211080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-11-25 20:44:22.211094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:14.235 [2024-11-25 20:44:22.211105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:14.235 [2024-11-25 20:44:22.211114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-11-25 20:44:22.288119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-11-25 20:44:22.288159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:14.235 [2024-11-25 20:44:22.288177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.106 ms 00:26:14.235 [2024-11-25 20:44:22.288203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-11-25 20:44:22.288286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-11-25 20:44:22.288300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:14.235 [2024-11-25 20:44:22.288312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:14.235 [2024-11-25 20:44:22.288322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-11-25 20:44:22.289115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-11-25 20:44:22.289135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:14.235 [2024-11-25 20:44:22.289147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:26:14.235 [2024-11-25 20:44:22.289164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-11-25 20:44:22.289300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:14.235 [2024-11-25 20:44:22.289320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:14.235 [2024-11-25 20:44:22.289347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:14.235 [2024-11-25 20:44:22.289358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-11-25 20:44:22.313490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-11-25 20:44:22.313526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:14.235 [2024-11-25 20:44:22.313540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.146 ms 00:26:14.235 [2024-11-25 20:44:22.313567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-11-25 20:44:22.333951] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:14.235 [2024-11-25 20:44:22.333990] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:14.235 [2024-11-25 20:44:22.334007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-11-25 20:44:22.334018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:14.235 [2024-11-25 20:44:22.334030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.310 ms 00:26:14.235 [2024-11-25 20:44:22.334041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-11-25 20:44:22.364266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-11-25 20:44:22.364308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:14.235 [2024-11-25 20:44:22.364323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.187 ms 00:26:14.235 [2024-11-25 20:44:22.364343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.382394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.382432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:14.494 [2024-11-25 20:44:22.382446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.990 ms 00:26:14.494 [2024-11-25 20:44:22.382472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.399913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.399946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:14.494 [2024-11-25 20:44:22.399958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.386 ms 00:26:14.494 [2024-11-25 20:44:22.399968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.400764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.400797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:14.494 [2024-11-25 20:44:22.400810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:26:14.494 [2024-11-25 20:44:22.400821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.495537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 
20:44:22.495633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:14.494 [2024-11-25 20:44:22.495669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.833 ms 00:26:14.494 [2024-11-25 20:44:22.495680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.506807] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:14.494 [2024-11-25 20:44:22.532957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.533011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:14.494 [2024-11-25 20:44:22.533035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.234 ms 00:26:14.494 [2024-11-25 20:44:22.533046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.533205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.533220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:14.494 [2024-11-25 20:44:22.533233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:14.494 [2024-11-25 20:44:22.533243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.533360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.533395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:14.494 [2024-11-25 20:44:22.533413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:14.494 [2024-11-25 20:44:22.533429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.533465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.533477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:14.494 [2024-11-25 20:44:22.533488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:14.494 [2024-11-25 20:44:22.533499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.533541] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:14.494 [2024-11-25 20:44:22.533554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.533565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:14.494 [2024-11-25 20:44:22.533575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:14.494 [2024-11-25 20:44:22.533586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.572621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-11-25 20:44:22.572665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:14.494 [2024-11-25 20:44:22.572681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.057 ms 00:26:14.494 [2024-11-25 20:44:22.572693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-11-25 20:44:22.572821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.495 [2024-11-25 20:44:22.572837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:14.495 [2024-11-25 
20:44:22.572849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:14.495 [2024-11-25 20:44:22.572864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.495 [2024-11-25 20:44:22.574261] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:14.495 [2024-11-25 20:44:22.578664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 462.816 ms, result 0 00:26:14.495 [2024-11-25 20:44:22.579610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:14.495 [2024-11-25 20:44:22.598520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:15.872  [2024-11-25T20:44:24.945Z] Copying: 26/256 [MB] (26 MBps) [2024-11-25T20:44:25.881Z] Copying: 51/256 [MB] (24 MBps) [2024-11-25T20:44:26.817Z] Copying: 75/256 [MB] (24 MBps) [2024-11-25T20:44:27.754Z] Copying: 100/256 [MB] (24 MBps) [2024-11-25T20:44:28.771Z] Copying: 125/256 [MB] (24 MBps) [2024-11-25T20:44:29.735Z] Copying: 149/256 [MB] (24 MBps) [2024-11-25T20:44:30.673Z] Copying: 174/256 [MB] (24 MBps) [2024-11-25T20:44:32.052Z] Copying: 199/256 [MB] (25 MBps) [2024-11-25T20:44:32.990Z] Copying: 224/256 [MB] (25 MBps) [2024-11-25T20:44:32.990Z] Copying: 249/256 [MB] (24 MBps) [2024-11-25T20:44:33.250Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-25 20:44:33.214747] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:25.114 [2024-11-25 20:44:33.233143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.114 [2024-11-25 20:44:33.233192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:25.114 [2024-11-25 20:44:33.233220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:25.114 [2024-11-25 20:44:33.233232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.114 [2024-11-25 20:44:33.233265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:25.114 [2024-11-25 20:44:33.238157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.114 [2024-11-25 20:44:33.238197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:25.114 [2024-11-25 20:44:33.238211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:26:25.114 [2024-11-25 20:44:33.238222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.114 [2024-11-25 20:44:33.238519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.114 [2024-11-25 20:44:33.238534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:25.114 [2024-11-25 20:44:33.238546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:26:25.114 [2024-11-25 20:44:33.238556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.114 [2024-11-25 20:44:33.241449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.114 [2024-11-25 20:44:33.241474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:25.114 [2024-11-25 20:44:33.241487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:26:25.114 [2024-11-25 20:44:33.241499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
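The copy progress above reports 24-26 MBps per sample and an average of 24 MBps at completion, which squares with the wall clock: the app_thread IO channel was created at 20:44:22.598 and destroyed at 20:44:33.214, roughly 10.6 s for 256 MB. A quick cross-check (the 10.6 is read off those two timestamps):

  # 256 MB moved in ~10.6 s of wall time.
  echo "scale=1; 256 / 10.6" | bc   # 24.1, matching "average 24 MBps"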
[FTL][ftl0] status: 0 00:26:25.374 [2024-11-25 20:44:33.247349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.374 [2024-11-25 20:44:33.247393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:25.374 [2024-11-25 20:44:33.247407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.833 ms 00:26:25.374 [2024-11-25 20:44:33.247418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.374 [2024-11-25 20:44:33.284747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.374 [2024-11-25 20:44:33.284786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:25.375 [2024-11-25 20:44:33.284817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.289 ms 00:26:25.375 [2024-11-25 20:44:33.284828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.375 [2024-11-25 20:44:33.305916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.375 [2024-11-25 20:44:33.305962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:25.375 [2024-11-25 20:44:33.305992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.076 ms 00:26:25.375 [2024-11-25 20:44:33.306003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.375 [2024-11-25 20:44:33.306161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.375 [2024-11-25 20:44:33.306175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:25.375 [2024-11-25 20:44:33.306201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:26:25.375 [2024-11-25 20:44:33.306211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.375 [2024-11-25 20:44:33.342351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.375 [2024-11-25 20:44:33.342390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:25.375 [2024-11-25 20:44:33.342403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.179 ms 00:26:25.375 [2024-11-25 20:44:33.342414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.375 [2024-11-25 20:44:33.377841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.375 [2024-11-25 20:44:33.377880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:25.375 [2024-11-25 20:44:33.377894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.427 ms 00:26:25.375 [2024-11-25 20:44:33.377903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.375 [2024-11-25 20:44:33.413160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.375 [2024-11-25 20:44:33.413207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:25.375 [2024-11-25 20:44:33.413237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.271 ms 00:26:25.375 [2024-11-25 20:44:33.413247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.375 [2024-11-25 20:44:33.448867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.375 [2024-11-25 20:44:33.448904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:25.375 [2024-11-25 20:44:33.448933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.602 ms 00:26:25.375 
[2024-11-25 20:44:33.448943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.375 [2024-11-25 20:44:33.448986] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:25.375 [2024-11-25 20:44:33.449004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449256] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:25.375 [2024-11-25 20:44:33.449526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 
20:44:33.449548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:26:25.376 [2024-11-25 20:44:33.449829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.449990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:25.376 [2024-11-25 20:44:33.450146] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:25.376 [2024-11-25 20:44:33.450157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b0a869f-e1da-4642-8cb8-2fcd9c0e6bb6 00:26:25.376 [2024-11-25 20:44:33.450168] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:25.376 [2024-11-25 20:44:33.450178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:25.376 [2024-11-25 20:44:33.450188] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:25.376 [2024-11-25 20:44:33.450199] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:25.376 [2024-11-25 20:44:33.450209] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:25.376 [2024-11-25 20:44:33.450221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:25.376 [2024-11-25 20:44:33.450236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:25.376 [2024-11-25 20:44:33.450246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:25.376 [2024-11-25 20:44:33.450255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:25.376 [2024-11-25 20:44:33.450265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.376 [2024-11-25 20:44:33.450276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:25.376 [2024-11-25 20:44:33.450287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:26:25.376 [2024-11-25 20:44:33.450297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.376 [2024-11-25 20:44:33.471424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.376 [2024-11-25 20:44:33.471457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:25.376 [2024-11-25 20:44:33.471471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.138 ms 00:26:25.376 [2024-11-25 20:44:33.471487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.377 [2024-11-25 20:44:33.472134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.377 [2024-11-25 20:44:33.472171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:25.377 [2024-11-25 20:44:33.472183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:26:25.377 [2024-11-25 20:44:33.472193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.636 [2024-11-25 20:44:33.530472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.636 [2024-11-25 20:44:33.530512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.636 [2024-11-25 20:44:33.530542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.636 [2024-11-25 20:44:33.530559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.636 [2024-11-25 20:44:33.530655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.636 [2024-11-25 20:44:33.530667] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.636 [2024-11-25 20:44:33.530679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.636 [2024-11-25 20:44:33.530689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.636 [2024-11-25 20:44:33.530748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.636 [2024-11-25 20:44:33.530772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.636 [2024-11-25 20:44:33.530783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.636 [2024-11-25 20:44:33.530793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.636 [2024-11-25 20:44:33.530819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.636 [2024-11-25 20:44:33.530830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.636 [2024-11-25 20:44:33.530841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.636 [2024-11-25 20:44:33.530850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.636 [2024-11-25 20:44:33.666994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.636 [2024-11-25 20:44:33.667088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.636 [2024-11-25 20:44:33.667106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.636 [2024-11-25 20:44:33.667125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.895 [2024-11-25 20:44:33.773533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.895 [2024-11-25 20:44:33.773628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.895 [2024-11-25 20:44:33.773647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.895 [2024-11-25 20:44:33.773659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.895 [2024-11-25 20:44:33.773783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.895 [2024-11-25 20:44:33.773796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.895 [2024-11-25 20:44:33.773808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.895 [2024-11-25 20:44:33.773818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.895 [2024-11-25 20:44:33.773852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.895 [2024-11-25 20:44:33.773869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.895 [2024-11-25 20:44:33.773880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.895 [2024-11-25 20:44:33.773891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.895 [2024-11-25 20:44:33.774023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.895 [2024-11-25 20:44:33.774037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.895 [2024-11-25 20:44:33.774049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.895 [2024-11-25 20:44:33.774060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.895 [2024-11-25 20:44:33.774101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:25.895 [2024-11-25 20:44:33.774119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.895 [2024-11-25 20:44:33.774130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.895 [2024-11-25 20:44:33.774141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.895 [2024-11-25 20:44:33.774192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.895 [2024-11-25 20:44:33.774204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.895 [2024-11-25 20:44:33.774215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.895 [2024-11-25 20:44:33.774226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.895 [2024-11-25 20:44:33.774279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.895 [2024-11-25 20:44:33.774296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.895 [2024-11-25 20:44:33.774307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.895 [2024-11-25 20:44:33.774317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.895 [2024-11-25 20:44:33.774506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.244 ms, result 0 00:26:26.832 00:26:26.832 00:26:26.832 20:44:34 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:27.401 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:26:27.401 20:44:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:26:27.401 20:44:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:26:27.401 20:44:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:27.401 20:44:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:27.401 20:44:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:26:27.401 20:44:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:27.401 20:44:35 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79129 00:26:27.401 20:44:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79129 ']' 00:26:27.401 20:44:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79129 00:26:27.401 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79129) - No such process 00:26:27.401 Process with pid 79129 is not found 00:26:27.401 20:44:35 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79129 is not found' 00:26:27.401 00:26:27.401 real 1m13.781s 00:26:27.401 user 1m40.018s 00:26:27.401 sys 0m8.301s 00:26:27.401 20:44:35 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.401 ************************************ 00:26:27.401 END TEST ftl_trim 00:26:27.401 ************************************ 00:26:27.401 20:44:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:27.661 20:44:35 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:27.661 20:44:35 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:27.661 20:44:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.661 20:44:35 ftl -- common/autotest_common.sh@10 
-- # set +x 00:26:27.661 ************************************ 00:26:27.661 START TEST ftl_restore 00:26:27.661 ************************************ 00:26:27.661 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:27.661 * Looking for test storage... 00:26:27.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:27.661 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:27.662 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:26:27.662 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:27.662 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:27.662 20:44:35 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.662 20:44:35 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.662 20:44:35 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.662 20:44:35 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.662 20:44:35 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.662 20:44:35 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.662 20:44:35 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.662 20:44:35 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.922 20:44:35 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:27.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.922 --rc genhtml_branch_coverage=1 00:26:27.922 --rc genhtml_function_coverage=1 00:26:27.922 --rc genhtml_legend=1 00:26:27.922 --rc geninfo_all_blocks=1 00:26:27.922 --rc geninfo_unexecuted_blocks=1 00:26:27.922 00:26:27.922 ' 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:27.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.922 --rc genhtml_branch_coverage=1 00:26:27.922 --rc genhtml_function_coverage=1 00:26:27.922 --rc genhtml_legend=1 00:26:27.922 --rc geninfo_all_blocks=1 00:26:27.922 --rc geninfo_unexecuted_blocks=1 00:26:27.922 00:26:27.922 ' 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:27.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.922 --rc genhtml_branch_coverage=1 00:26:27.922 --rc genhtml_function_coverage=1 00:26:27.922 --rc genhtml_legend=1 00:26:27.922 --rc geninfo_all_blocks=1 00:26:27.922 --rc geninfo_unexecuted_blocks=1 00:26:27.922 00:26:27.922 ' 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:27.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.922 --rc genhtml_branch_coverage=1 00:26:27.922 --rc genhtml_function_coverage=1 00:26:27.922 --rc genhtml_legend=1 00:26:27.922 --rc geninfo_all_blocks=1 00:26:27.922 --rc geninfo_unexecuted_blocks=1 00:26:27.922 00:26:27.922 ' 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
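The trace above shows how autotest_common.sh decides that the installed lcov (1.15) predates 2.x: both version strings are split on ".-:" into arrays (ver1=(1 15), ver2=(2)), the components are compared numerically left to right, and 1 < 2 settles it on the first component, so `lt 1.15 2` returns 0 (true) and the legacy `--rc lcov_*` option set is exported. A minimal standalone sketch of that component-wise comparison (the function name and the padding of missing components with 0 are illustrative, not copied from scripts/common.sh):

  # Return 0 (true) when version $1 is strictly older than $2.
  version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      a=${v1[i]:-0} b=${v2[i]:-0}        # missing components count as 0
      (( a > b )) && return 1
      (( a < b )) && return 0
    done
    return 1                             # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov < 2: use legacy LCOV_OPTS"   # prints the message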
00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NwJWzfU5tS 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:27.922 
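At this point restore.sh has finished its prologue: mktemp created the mount dir, `getopts :u:c:f` consumed `-c 0000:00:10.0` into nv_cache, `shift 2` dropped the consumed option pair so the remaining positional argument became device=0000:00:11.0, and a 240-second timeout plus a restore_kill trap were installed before any bdevs exist. A condensed sketch of that pattern (restore_kill is stubbed here, and the meanings of -u and -f are inferred from the option string rather than shown in this excerpt):

  #!/usr/bin/env bash
  # restore.sh-style argument handling: [-u uuid] [-c nv_cache_bdf] [-f] <device_bdf>
  restore_kill() { :; }                # stand-in for the script's cleanup function
  nv_cache='' uuid='' fast_mode=0      # fast_mode is a hypothetical name for -f
  while getopts ':u:c:f' opt; do
    case $opt in
      u) uuid=$OPTARG ;;
      c) nv_cache=$OPTARG ;;           # -c 0000:00:10.0 in this run
      f) fast_mode=1 ;;
      *) echo "usage: $0 [-u uuid] [-c bdf] [-f] device" >&2; exit 1 ;;
    esac
  done
  shift $(( OPTIND - 1 ))              # the traced script uses a literal "shift 2"
  device=$1                            # 0000:00:11.0 in this run
  timeout=240
  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
  : "test body runs here"
  trap - SIGINT SIGTERM EXIT           # disarmed on success, as trim.sh did above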
20:44:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79414 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:27.922 20:44:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79414 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79414 ']' 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.922 20:44:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:27.922 [2024-11-25 20:44:35.970198] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:26:27.922 [2024-11-25 20:44:35.970322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79414 ] 00:26:28.181 [2024-11-25 20:44:36.151111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.182 [2024-11-25 20:44:36.286323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.558 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.558 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:26:29.558 20:44:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:29.558 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:29.558 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:29.558 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:29.558 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:29.558 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:29.817 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:29.817 { 00:26:29.817 "name": "nvme0n1", 00:26:29.817 "aliases": [ 00:26:29.817 "c96e4930-630e-48b8-be2e-dd8865dd8746" 00:26:29.817 ], 00:26:29.817 "product_name": "NVMe disk", 00:26:29.817 "block_size": 4096, 00:26:29.817 "num_blocks": 1310720, 00:26:29.817 "uuid": 
"c96e4930-630e-48b8-be2e-dd8865dd8746", 00:26:29.817 "numa_id": -1, 00:26:29.817 "assigned_rate_limits": { 00:26:29.817 "rw_ios_per_sec": 0, 00:26:29.817 "rw_mbytes_per_sec": 0, 00:26:29.817 "r_mbytes_per_sec": 0, 00:26:29.817 "w_mbytes_per_sec": 0 00:26:29.817 }, 00:26:29.817 "claimed": true, 00:26:29.817 "claim_type": "read_many_write_one", 00:26:29.817 "zoned": false, 00:26:29.817 "supported_io_types": { 00:26:29.817 "read": true, 00:26:29.817 "write": true, 00:26:29.817 "unmap": true, 00:26:29.817 "flush": true, 00:26:29.817 "reset": true, 00:26:29.817 "nvme_admin": true, 00:26:29.817 "nvme_io": true, 00:26:29.817 "nvme_io_md": false, 00:26:29.817 "write_zeroes": true, 00:26:29.817 "zcopy": false, 00:26:29.817 "get_zone_info": false, 00:26:29.817 "zone_management": false, 00:26:29.817 "zone_append": false, 00:26:29.817 "compare": true, 00:26:29.817 "compare_and_write": false, 00:26:29.817 "abort": true, 00:26:29.817 "seek_hole": false, 00:26:29.817 "seek_data": false, 00:26:29.817 "copy": true, 00:26:29.817 "nvme_iov_md": false 00:26:29.817 }, 00:26:29.817 "driver_specific": { 00:26:29.817 "nvme": [ 00:26:29.817 { 00:26:29.817 "pci_address": "0000:00:11.0", 00:26:29.817 "trid": { 00:26:29.817 "trtype": "PCIe", 00:26:29.817 "traddr": "0000:00:11.0" 00:26:29.817 }, 00:26:29.817 "ctrlr_data": { 00:26:29.817 "cntlid": 0, 00:26:29.817 "vendor_id": "0x1b36", 00:26:29.817 "model_number": "QEMU NVMe Ctrl", 00:26:29.817 "serial_number": "12341", 00:26:29.817 "firmware_revision": "8.0.0", 00:26:29.817 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:29.817 "oacs": { 00:26:29.817 "security": 0, 00:26:29.817 "format": 1, 00:26:29.817 "firmware": 0, 00:26:29.817 "ns_manage": 1 00:26:29.817 }, 00:26:29.817 "multi_ctrlr": false, 00:26:29.817 "ana_reporting": false 00:26:29.817 }, 00:26:29.817 "vs": { 00:26:29.817 "nvme_version": "1.4" 00:26:29.817 }, 00:26:29.817 "ns_data": { 00:26:29.817 "id": 1, 00:26:29.817 "can_share": false 00:26:29.817 } 00:26:29.817 } 00:26:29.817 ], 00:26:29.817 "mp_policy": "active_passive" 00:26:29.817 } 00:26:29.817 } 00:26:29.817 ]' 00:26:29.817 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:29.817 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:29.817 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:29.817 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:29.817 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:29.817 20:44:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:26:29.817 20:44:37 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:26:29.817 20:44:37 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:29.817 20:44:37 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:26:29.818 20:44:37 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:29.818 20:44:37 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:30.076 20:44:38 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=922045c8-773e-4c53-a06c-f6742f30badb 00:26:30.076 20:44:38 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:26:30.076 20:44:38 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 922045c8-773e-4c53-a06c-f6742f30badb 00:26:30.335 20:44:38 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:26:30.594 20:44:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=e39689e2-2a05-4b5f-bc52-e5c1f53e60f4 00:26:30.594 20:44:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e39689e2-2a05-4b5f-bc52-e5c1f53e60f4 00:26:30.854 20:44:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:30.854 20:44:38 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:30.854 20:44:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:30.854 20:44:38 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:26:30.854 20:44:38 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:30.854 20:44:38 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:30.854 20:44:38 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:26:30.854 20:44:38 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:30.854 20:44:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:30.854 20:44:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:30.854 20:44:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:30.854 20:44:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:30.854 20:44:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:30.854 20:44:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:30.854 { 00:26:30.854 "name": "705d77ca-a318-4797-ae6d-4b9dae3cea94", 00:26:30.854 "aliases": [ 00:26:30.854 "lvs/nvme0n1p0" 00:26:30.854 ], 00:26:30.854 "product_name": "Logical Volume", 00:26:30.854 "block_size": 4096, 00:26:30.854 "num_blocks": 26476544, 00:26:30.854 "uuid": "705d77ca-a318-4797-ae6d-4b9dae3cea94", 00:26:30.854 "assigned_rate_limits": { 00:26:30.854 "rw_ios_per_sec": 0, 00:26:30.854 "rw_mbytes_per_sec": 0, 00:26:30.854 "r_mbytes_per_sec": 0, 00:26:30.854 "w_mbytes_per_sec": 0 00:26:30.854 }, 00:26:30.854 "claimed": false, 00:26:30.854 "zoned": false, 00:26:30.854 "supported_io_types": { 00:26:30.854 "read": true, 00:26:30.854 "write": true, 00:26:30.854 "unmap": true, 00:26:30.854 "flush": false, 00:26:30.854 "reset": true, 00:26:30.854 "nvme_admin": false, 00:26:30.854 "nvme_io": false, 00:26:30.854 "nvme_io_md": false, 00:26:30.854 "write_zeroes": true, 00:26:30.854 "zcopy": false, 00:26:30.854 "get_zone_info": false, 00:26:30.854 "zone_management": false, 00:26:30.854 "zone_append": false, 00:26:30.854 "compare": false, 00:26:30.854 "compare_and_write": false, 00:26:30.854 "abort": false, 00:26:30.854 "seek_hole": true, 00:26:30.854 "seek_data": true, 00:26:30.854 "copy": false, 00:26:30.854 "nvme_iov_md": false 00:26:30.854 }, 00:26:30.854 "driver_specific": { 00:26:30.854 "lvol": { 00:26:30.854 "lvol_store_uuid": "e39689e2-2a05-4b5f-bc52-e5c1f53e60f4", 00:26:30.854 "base_bdev": "nvme0n1", 00:26:30.854 "thin_provision": true, 00:26:30.854 "num_allocated_clusters": 0, 00:26:30.854 "snapshot": false, 00:26:30.854 "clone": false, 00:26:30.854 "esnap_clone": false 00:26:30.854 } 00:26:30.854 } 00:26:30.854 } 00:26:30.854 ]' 00:26:30.854 20:44:38 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:31.113 20:44:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:31.113 20:44:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:31.113 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:31.113 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:31.113 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:31.113 20:44:39 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:26:31.113 20:44:39 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:26:31.113 20:44:39 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:31.384 20:44:39 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:31.384 20:44:39 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:31.384 20:44:39 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:31.384 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:31.384 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:31.384 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:31.384 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:31.384 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:31.384 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:31.384 { 00:26:31.384 "name": "705d77ca-a318-4797-ae6d-4b9dae3cea94", 00:26:31.384 "aliases": [ 00:26:31.384 "lvs/nvme0n1p0" 00:26:31.384 ], 00:26:31.384 "product_name": "Logical Volume", 00:26:31.384 "block_size": 4096, 00:26:31.384 "num_blocks": 26476544, 00:26:31.384 "uuid": "705d77ca-a318-4797-ae6d-4b9dae3cea94", 00:26:31.384 "assigned_rate_limits": { 00:26:31.384 "rw_ios_per_sec": 0, 00:26:31.384 "rw_mbytes_per_sec": 0, 00:26:31.384 "r_mbytes_per_sec": 0, 00:26:31.384 "w_mbytes_per_sec": 0 00:26:31.384 }, 00:26:31.384 "claimed": false, 00:26:31.384 "zoned": false, 00:26:31.384 "supported_io_types": { 00:26:31.384 "read": true, 00:26:31.384 "write": true, 00:26:31.384 "unmap": true, 00:26:31.384 "flush": false, 00:26:31.384 "reset": true, 00:26:31.385 "nvme_admin": false, 00:26:31.385 "nvme_io": false, 00:26:31.385 "nvme_io_md": false, 00:26:31.385 "write_zeroes": true, 00:26:31.385 "zcopy": false, 00:26:31.385 "get_zone_info": false, 00:26:31.385 "zone_management": false, 00:26:31.385 "zone_append": false, 00:26:31.385 "compare": false, 00:26:31.385 "compare_and_write": false, 00:26:31.385 "abort": false, 00:26:31.385 "seek_hole": true, 00:26:31.385 "seek_data": true, 00:26:31.385 "copy": false, 00:26:31.385 "nvme_iov_md": false 00:26:31.385 }, 00:26:31.385 "driver_specific": { 00:26:31.385 "lvol": { 00:26:31.385 "lvol_store_uuid": "e39689e2-2a05-4b5f-bc52-e5c1f53e60f4", 00:26:31.385 "base_bdev": "nvme0n1", 00:26:31.385 "thin_provision": true, 00:26:31.385 "num_allocated_clusters": 0, 00:26:31.385 "snapshot": false, 00:26:31.385 "clone": false, 00:26:31.385 "esnap_clone": false 00:26:31.385 } 00:26:31.385 } 00:26:31.385 } 00:26:31.385 ]' 00:26:31.385 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
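get_bdev_size, traced repeatedly above, turns the bdev_get_bdevs JSON into a size in MiB: bs comes from .block_size, nb from .num_blocks, and size = bs * nb / 1024 / 1024. For the lvol that works out to 4096 * 26476544 / 2^20 = 103424 MiB, and earlier to 4096 * 1310720 / 2^20 = 5120 MiB for nvme0n1. A self-contained sketch of the same computation (requires jq and a running spdk_tgt; the default bdev name below is the lvol from this run):

  #!/usr/bin/env bash
  # Bdev size in MiB, the way get_bdev_size derives it: block_size * num_blocks.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev=${1:-705d77ca-a318-4797-ae6d-4b9dae3cea94}

  info=$("$rpc" bdev_get_bdevs -b "$bdev")
  bs=$(jq '.[] .block_size' <<< "$info")     # 4096 in this run
  nb=$(jq '.[] .num_blocks' <<< "$info")     # 26476544 (lvol) / 1310720 (nvme0n1)
  echo "$bdev: $(( bs * nb / 1024 / 1024 )) MiB"   # 103424 MiB / 5120 MiB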
00:26:31.643 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:31.643 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:31.643 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:31.643 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:31.643 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:31.643 20:44:39 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:26:31.643 20:44:39 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:31.902 20:44:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:31.902 20:44:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:31.902 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:31.902 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:31.902 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:31.902 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:31.902 20:44:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 705d77ca-a318-4797-ae6d-4b9dae3cea94 00:26:31.902 20:44:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:31.902 { 00:26:31.902 "name": "705d77ca-a318-4797-ae6d-4b9dae3cea94", 00:26:31.902 "aliases": [ 00:26:31.902 "lvs/nvme0n1p0" 00:26:31.902 ], 00:26:31.902 "product_name": "Logical Volume", 00:26:31.902 "block_size": 4096, 00:26:31.902 "num_blocks": 26476544, 00:26:31.902 "uuid": "705d77ca-a318-4797-ae6d-4b9dae3cea94", 00:26:31.902 "assigned_rate_limits": { 00:26:31.902 "rw_ios_per_sec": 0, 00:26:31.902 "rw_mbytes_per_sec": 0, 00:26:31.902 "r_mbytes_per_sec": 0, 00:26:31.902 "w_mbytes_per_sec": 0 00:26:31.902 }, 00:26:31.902 "claimed": false, 00:26:31.902 "zoned": false, 00:26:31.902 "supported_io_types": { 00:26:31.902 "read": true, 00:26:31.902 "write": true, 00:26:31.902 "unmap": true, 00:26:31.902 "flush": false, 00:26:31.902 "reset": true, 00:26:31.902 "nvme_admin": false, 00:26:31.902 "nvme_io": false, 00:26:31.902 "nvme_io_md": false, 00:26:31.903 "write_zeroes": true, 00:26:31.903 "zcopy": false, 00:26:31.903 "get_zone_info": false, 00:26:31.903 "zone_management": false, 00:26:31.903 "zone_append": false, 00:26:31.903 "compare": false, 00:26:31.903 "compare_and_write": false, 00:26:31.903 "abort": false, 00:26:31.903 "seek_hole": true, 00:26:31.903 "seek_data": true, 00:26:31.903 "copy": false, 00:26:31.903 "nvme_iov_md": false 00:26:31.903 }, 00:26:31.903 "driver_specific": { 00:26:31.903 "lvol": { 00:26:31.903 "lvol_store_uuid": "e39689e2-2a05-4b5f-bc52-e5c1f53e60f4", 00:26:31.903 "base_bdev": "nvme0n1", 00:26:31.903 "thin_provision": true, 00:26:31.903 "num_allocated_clusters": 0, 00:26:31.903 "snapshot": false, 00:26:31.903 "clone": false, 00:26:31.903 "esnap_clone": false 00:26:31.903 } 00:26:31.903 } 00:26:31.903 } 00:26:31.903 ]' 00:26:31.903 20:44:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:32.161 20:44:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:32.161 20:44:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:32.161 20:44:40 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:26:32.161 20:44:40 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:32.161 20:44:40 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:32.161 20:44:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:32.161 20:44:40 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 705d77ca-a318-4797-ae6d-4b9dae3cea94 --l2p_dram_limit 10' 00:26:32.161 20:44:40 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:32.161 20:44:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:32.161 20:44:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:32.161 20:44:40 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:26:32.161 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:26:32.161 20:44:40 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 705d77ca-a318-4797-ae6d-4b9dae3cea94 --l2p_dram_limit 10 -c nvc0n1p0 00:26:32.421 [2024-11-25 20:44:40.298559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.298619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:32.421 [2024-11-25 20:44:40.298642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:32.421 [2024-11-25 20:44:40.298654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.298741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.298753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:32.421 [2024-11-25 20:44:40.298768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:32.421 [2024-11-25 20:44:40.298778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.298813] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:32.421 [2024-11-25 20:44:40.299922] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:32.421 [2024-11-25 20:44:40.299967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.299979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:32.421 [2024-11-25 20:44:40.299994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:26:32.421 [2024-11-25 20:44:40.300005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.300099] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 784b2adc-fa29-40f2-b6cc-6376a96317b8 00:26:32.421 [2024-11-25 20:44:40.302507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.302546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:32.421 [2024-11-25 20:44:40.302559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:32.421 [2024-11-25 20:44:40.302575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.316245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 
20:44:40.316288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:32.421 [2024-11-25 20:44:40.316302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.627 ms 00:26:32.421 [2024-11-25 20:44:40.316316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.316457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.316476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:32.421 [2024-11-25 20:44:40.316488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:32.421 [2024-11-25 20:44:40.316507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.316574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.316590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:32.421 [2024-11-25 20:44:40.316604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:32.421 [2024-11-25 20:44:40.316618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.316648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:32.421 [2024-11-25 20:44:40.322854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.322889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:32.421 [2024-11-25 20:44:40.322907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.222 ms 00:26:32.421 [2024-11-25 20:44:40.322917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.322959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.322971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:32.421 [2024-11-25 20:44:40.322985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:32.421 [2024-11-25 20:44:40.322996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.323036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:32.421 [2024-11-25 20:44:40.323178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:32.421 [2024-11-25 20:44:40.323201] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:32.421 [2024-11-25 20:44:40.323216] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:32.421 [2024-11-25 20:44:40.323232] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:32.421 [2024-11-25 20:44:40.323245] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:32.421 [2024-11-25 20:44:40.323260] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:32.421 [2024-11-25 20:44:40.323270] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:32.421 [2024-11-25 20:44:40.323287] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:32.421 [2024-11-25 20:44:40.323298] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:32.421 [2024-11-25 20:44:40.323312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.323351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:32.421 [2024-11-25 20:44:40.323365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:26:32.421 [2024-11-25 20:44:40.323376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.323457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.421 [2024-11-25 20:44:40.323468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:32.421 [2024-11-25 20:44:40.323481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:32.421 [2024-11-25 20:44:40.323492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.421 [2024-11-25 20:44:40.323598] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:32.422 [2024-11-25 20:44:40.323619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:32.422 [2024-11-25 20:44:40.323634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:32.422 [2024-11-25 20:44:40.323644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:32.422 [2024-11-25 20:44:40.323668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:32.422 [2024-11-25 20:44:40.323690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:32.422 [2024-11-25 20:44:40.323702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:32.422 [2024-11-25 20:44:40.323723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:32.422 [2024-11-25 20:44:40.323733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:32.422 [2024-11-25 20:44:40.323745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:32.422 [2024-11-25 20:44:40.323754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:32.422 [2024-11-25 20:44:40.323766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:32.422 [2024-11-25 20:44:40.323775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:32.422 [2024-11-25 20:44:40.323800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:32.422 [2024-11-25 20:44:40.323813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:32.422 [2024-11-25 20:44:40.323834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.422 [2024-11-25 20:44:40.323856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:32.422 
[2024-11-25 20:44:40.323867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.422 [2024-11-25 20:44:40.323888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:32.422 [2024-11-25 20:44:40.323901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.422 [2024-11-25 20:44:40.323921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:32.422 [2024-11-25 20:44:40.323931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:32.422 [2024-11-25 20:44:40.323953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:32.422 [2024-11-25 20:44:40.323968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:32.422 [2024-11-25 20:44:40.323977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:32.422 [2024-11-25 20:44:40.323989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:32.422 [2024-11-25 20:44:40.323998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:32.422 [2024-11-25 20:44:40.324010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:32.422 [2024-11-25 20:44:40.324020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:32.422 [2024-11-25 20:44:40.324031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:32.422 [2024-11-25 20:44:40.324040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.422 [2024-11-25 20:44:40.324052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:32.422 [2024-11-25 20:44:40.324061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:32.422 [2024-11-25 20:44:40.324073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.422 [2024-11-25 20:44:40.324081] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:32.422 [2024-11-25 20:44:40.324094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:32.422 [2024-11-25 20:44:40.324104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:32.422 [2024-11-25 20:44:40.324118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:32.422 [2024-11-25 20:44:40.324129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:32.422 [2024-11-25 20:44:40.324145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:32.422 [2024-11-25 20:44:40.324154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:32.422 [2024-11-25 20:44:40.324166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:32.422 [2024-11-25 20:44:40.324175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:32.422 [2024-11-25 20:44:40.324188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:32.422 [2024-11-25 20:44:40.324204] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:32.422 [2024-11-25 
20:44:40.324224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.422 [2024-11-25 20:44:40.324237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:32.422 [2024-11-25 20:44:40.324252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:32.422 [2024-11-25 20:44:40.324262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:32.422 [2024-11-25 20:44:40.324276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:32.422 [2024-11-25 20:44:40.324286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:32.422 [2024-11-25 20:44:40.324299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:32.422 [2024-11-25 20:44:40.324310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:32.422 [2024-11-25 20:44:40.324334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:32.422 [2024-11-25 20:44:40.324346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:32.422 [2024-11-25 20:44:40.324363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:32.422 [2024-11-25 20:44:40.324373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:32.422 [2024-11-25 20:44:40.324386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:32.422 [2024-11-25 20:44:40.324396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:32.422 [2024-11-25 20:44:40.324411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:32.422 [2024-11-25 20:44:40.324421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:32.422 [2024-11-25 20:44:40.324436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:32.422 [2024-11-25 20:44:40.324447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:32.422 [2024-11-25 20:44:40.324460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:32.422 [2024-11-25 20:44:40.324470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:32.422 [2024-11-25 20:44:40.324484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:32.422 [2024-11-25 20:44:40.324495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.422 [2024-11-25 20:44:40.324509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:32.422 [2024-11-25 20:44:40.324519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:26:32.423 [2024-11-25 20:44:40.324532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.423 [2024-11-25 20:44:40.324579] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:32.423 [2024-11-25 20:44:40.324599] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:36.617 [2024-11-25 20:44:43.909811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:43.909906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:36.617 [2024-11-25 20:44:43.909929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3591.046 ms 00:26:36.617 [2024-11-25 20:44:43.909945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:43.956914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:43.956995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:36.617 [2024-11-25 20:44:43.957015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.696 ms 00:26:36.617 [2024-11-25 20:44:43.957030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:43.957203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:43.957220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:36.617 [2024-11-25 20:44:43.957233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:36.617 [2024-11-25 20:44:43.957255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.012087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.012162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:36.617 [2024-11-25 20:44:44.012178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.852 ms 00:26:36.617 [2024-11-25 20:44:44.012193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.012250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.012265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:36.617 [2024-11-25 20:44:44.012277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:36.617 [2024-11-25 20:44:44.012303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.013153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.013180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:36.617 [2024-11-25 20:44:44.013193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:26:36.617 [2024-11-25 20:44:44.013207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 
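
A quick cross-check of the superblock layout dump above, assuming the FTL block size is 4 KiB (the block size itself is not printed here): the hex blk_sz fields are block counts, so each region's MiB figure in the earlier layout dump should follow directly. A minimal shell sketch:

    # the 0x5000-block region matches the "l2p ... blocks: 80.00 MiB" line above
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))      # 20480 blocks at 4096 B/block -> 80 (MiB)
    # the base-device 0x1900000-block region matches "data_btm ... blocks: 102400.00 MiB"
    echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # 26214400 blocks -> 102400 (MiB)
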
[2024-11-25 20:44:44.013339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.013355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:36.617 [2024-11-25 20:44:44.013370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:36.617 [2024-11-25 20:44:44.013388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.039567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.039624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:36.617 [2024-11-25 20:44:44.039641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.198 ms 00:26:36.617 [2024-11-25 20:44:44.039655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.053455] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:36.617 [2024-11-25 20:44:44.058728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.058760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:36.617 [2024-11-25 20:44:44.058778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.978 ms 00:26:36.617 [2024-11-25 20:44:44.058789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.174498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.174599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:36.617 [2024-11-25 20:44:44.174624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.848 ms 00:26:36.617 [2024-11-25 20:44:44.174637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.174868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.174887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:36.617 [2024-11-25 20:44:44.174906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:26:36.617 [2024-11-25 20:44:44.174916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.211553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.617 [2024-11-25 20:44:44.211598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:36.617 [2024-11-25 20:44:44.211618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.635 ms 00:26:36.617 [2024-11-25 20:44:44.211629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.617 [2024-11-25 20:44:44.246773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.618 [2024-11-25 20:44:44.246813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:36.618 [2024-11-25 20:44:44.246831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.132 ms 00:26:36.618 [2024-11-25 20:44:44.246857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.618 [2024-11-25 20:44:44.247587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.618 [2024-11-25 20:44:44.247616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:36.618 
[2024-11-25 20:44:44.247633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:26:36.618 [2024-11-25 20:44:44.247647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.618 [2024-11-25 20:44:44.350690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.618 [2024-11-25 20:44:44.350748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:36.618 [2024-11-25 20:44:44.350790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.144 ms 00:26:36.618 [2024-11-25 20:44:44.350801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.618 [2024-11-25 20:44:44.387906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.618 [2024-11-25 20:44:44.387952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:36.618 [2024-11-25 20:44:44.387987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.070 ms 00:26:36.618 [2024-11-25 20:44:44.387998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.618 [2024-11-25 20:44:44.423031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.618 [2024-11-25 20:44:44.423073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:36.618 [2024-11-25 20:44:44.423106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.039 ms 00:26:36.618 [2024-11-25 20:44:44.423117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.618 [2024-11-25 20:44:44.459139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.618 [2024-11-25 20:44:44.459182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:36.618 [2024-11-25 20:44:44.459200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.032 ms 00:26:36.618 [2024-11-25 20:44:44.459210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.618 [2024-11-25 20:44:44.459278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.618 [2024-11-25 20:44:44.459290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:36.618 [2024-11-25 20:44:44.459309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:36.618 [2024-11-25 20:44:44.459319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.618 [2024-11-25 20:44:44.459446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.618 [2024-11-25 20:44:44.459463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:36.618 [2024-11-25 20:44:44.459477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:36.618 [2024-11-25 20:44:44.459487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.618 [2024-11-25 20:44:44.460920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4168.579 ms, result 0 00:26:36.618 { 00:26:36.618 "name": "ftl0", 00:26:36.618 "uuid": "784b2adc-fa29-40f2-b6cc-6376a96317b8" 00:26:36.618 } 00:26:36.618 20:44:44 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:36.618 20:44:44 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:36.618 20:44:44 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:26:36.618 20:44:44 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:36.877 [2024-11-25 20:44:44.899227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:44.899319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:36.877 [2024-11-25 20:44:44.899338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:36.877 [2024-11-25 20:44:44.899365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.877 [2024-11-25 20:44:44.899397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:36.877 [2024-11-25 20:44:44.904011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:44.904046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:36.877 [2024-11-25 20:44:44.904064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:26:36.877 [2024-11-25 20:44:44.904074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.877 [2024-11-25 20:44:44.904349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:44.904390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:36.877 [2024-11-25 20:44:44.904406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:26:36.877 [2024-11-25 20:44:44.904415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.877 [2024-11-25 20:44:44.906958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:44.906983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:36.877 [2024-11-25 20:44:44.906997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.526 ms 00:26:36.877 [2024-11-25 20:44:44.907008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.877 [2024-11-25 20:44:44.911994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:44.912031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:36.877 [2024-11-25 20:44:44.912050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.967 ms 00:26:36.877 [2024-11-25 20:44:44.912060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.877 [2024-11-25 20:44:44.949036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:44.949077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:36.877 [2024-11-25 20:44:44.949112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.955 ms 00:26:36.877 [2024-11-25 20:44:44.949123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.877 [2024-11-25 20:44:44.971033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:44.971075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:36.877 [2024-11-25 20:44:44.971094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.893 ms 00:26:36.877 [2024-11-25 20:44:44.971105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.877 [2024-11-25 20:44:44.971319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:44.971358] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:36.877 [2024-11-25 20:44:44.971372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:26:36.877 [2024-11-25 20:44:44.971384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.877 [2024-11-25 20:44:45.007842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.877 [2024-11-25 20:44:45.007881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:36.877 [2024-11-25 20:44:45.007913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.490 ms 00:26:36.877 [2024-11-25 20:44:45.007924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.138 [2024-11-25 20:44:45.043954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.138 [2024-11-25 20:44:45.043993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:37.138 [2024-11-25 20:44:45.044011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.042 ms 00:26:37.138 [2024-11-25 20:44:45.044021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.138 [2024-11-25 20:44:45.080232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.138 [2024-11-25 20:44:45.080271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:37.138 [2024-11-25 20:44:45.080289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.219 ms 00:26:37.138 [2024-11-25 20:44:45.080299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.138 [2024-11-25 20:44:45.116260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.138 [2024-11-25 20:44:45.116298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:37.138 [2024-11-25 20:44:45.116315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.882 ms 00:26:37.138 [2024-11-25 20:44:45.116332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.138 [2024-11-25 20:44:45.116394] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:37.138 [2024-11-25 20:44:45.116413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116540] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 
[2024-11-25 20:44:45.116856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.116998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:26:37.138 [2024-11-25 20:44:45.117373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:37.138 [2024-11-25 20:44:45.117425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:37.139 [2024-11-25 20:44:45.117914] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:37.139 [2024-11-25 20:44:45.117928] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 784b2adc-fa29-40f2-b6cc-6376a96317b8 00:26:37.139 [2024-11-25 20:44:45.117939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:37.139 [2024-11-25 20:44:45.117955] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:37.139 [2024-11-25 20:44:45.117969] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:37.139 [2024-11-25 20:44:45.117983] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:37.139 [2024-11-25 20:44:45.117993] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:37.139 [2024-11-25 20:44:45.118006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:37.139 [2024-11-25 20:44:45.118017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:37.139 [2024-11-25 20:44:45.118029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:37.139 [2024-11-25 20:44:45.118037] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:26:37.139 [2024-11-25 20:44:45.118050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.139 [2024-11-25 20:44:45.118061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:37.139 [2024-11-25 20:44:45.118076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:26:37.139 [2024-11-25 20:44:45.118089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.139 [2024-11-25 20:44:45.139068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.139 [2024-11-25 20:44:45.139102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:37.139 [2024-11-25 20:44:45.139119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.952 ms 00:26:37.139 [2024-11-25 20:44:45.139147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.139 [2024-11-25 20:44:45.139767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.139 [2024-11-25 20:44:45.139785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:37.139 [2024-11-25 20:44:45.139804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:26:37.139 [2024-11-25 20:44:45.139815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.139 [2024-11-25 20:44:45.209491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.139 [2024-11-25 20:44:45.209536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.139 [2024-11-25 20:44:45.209554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.139 [2024-11-25 20:44:45.209581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.139 [2024-11-25 20:44:45.209667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.139 [2024-11-25 20:44:45.209679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.139 [2024-11-25 20:44:45.209698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.139 [2024-11-25 20:44:45.209709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.139 [2024-11-25 20:44:45.209832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.139 [2024-11-25 20:44:45.209846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.139 [2024-11-25 20:44:45.209860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.139 [2024-11-25 20:44:45.209870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.139 [2024-11-25 20:44:45.209898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.139 [2024-11-25 20:44:45.209910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.139 [2024-11-25 20:44:45.209923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.139 [2024-11-25 20:44:45.209937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.346000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.399 [2024-11-25 20:44:45.346061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.399 [2024-11-25 20:44:45.346081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
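
On the "WAF: inf" line in the statistics dump above: the write amplification factor here appears to be total writes over user writes, and this shutdown happened before any user I/O, so

    WAF = total writes / user writes = 960 / 0 -> inf

i.e. all 960 writes were internal (metadata and the NV cache scrub), which is why any metadata-only run of this kind prints "inf".
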
00:26:37.399 [2024-11-25 20:44:45.346109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.449670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.399 [2024-11-25 20:44:45.449754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.399 [2024-11-25 20:44:45.449775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.399 [2024-11-25 20:44:45.449790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.449954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.399 [2024-11-25 20:44:45.449969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.399 [2024-11-25 20:44:45.449982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.399 [2024-11-25 20:44:45.449993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.450061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.399 [2024-11-25 20:44:45.450073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.399 [2024-11-25 20:44:45.450086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.399 [2024-11-25 20:44:45.450112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.450249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.399 [2024-11-25 20:44:45.450263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.399 [2024-11-25 20:44:45.450276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.399 [2024-11-25 20:44:45.450287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.450334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.399 [2024-11-25 20:44:45.450365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:37.399 [2024-11-25 20:44:45.450380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.399 [2024-11-25 20:44:45.450390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.450447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.399 [2024-11-25 20:44:45.450459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.399 [2024-11-25 20:44:45.450473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.399 [2024-11-25 20:44:45.450483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.450545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.399 [2024-11-25 20:44:45.450561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.399 [2024-11-25 20:44:45.450575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.399 [2024-11-25 20:44:45.450586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.399 [2024-11-25 20:44:45.450747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.374 ms, result 0 00:26:37.399 true 00:26:37.399 20:44:45 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79414 
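
Before the target process is killed, the restore.sh@61-65 traces further above wrapped the bdev subsystem config in a JSON envelope and then unloaded ftl0 (the bare "true" above is the unload RPC's return value). A minimal sketch of that sequence, assuming the envelope is redirected into the ftl.json path that the spdk_dd step below consumes, with $rootdir standing in for the spdk repo root:

    {
      echo '{"subsystems": ['
      "$rootdir"/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    } > "$rootdir"/test/ftl/config/ftl.json     # assumed target, taken from the --json= argument below
    "$rootdir"/scripts/rpc.py bdev_ftl_unload -b ftl0   # prints "true" on success, as seen above
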
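The killprocess traces that follow come from common/autotest_common.sh. Read literally, the @954-@978 lines amount to roughly this simplified sketch (not the exact helper body):

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                           # @954: a pid argument is required
      kill -0 "$pid" || return 1                          # @958: bail out if the process is already gone
      [ "$(uname)" = Linux ] &&                           # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: resolves to reactor_0 here
      [ "$process_name" = sudo ] && :                     # @964: sudo wrappers get special handling, not taken in this run
      echo "killing process with pid $pid"                # @972
      kill "$pid" && wait "$pid"                          # @973/@978: signal, then reap
    }
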
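The dd step a little further below then fills the 1 GiB test file that will be written through ftl0; its reported rate is self-consistent: 256K blocks of 4 KiB are 262144 * 4096 = 1073741824 bytes, and 1073741824 B over 4.08058 s is about 263 MB/s, exactly what dd's own summary prints. For instance:

    awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 4.08058 / 1e6 }'   # -> 263 MB/s
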
00:26:37.399 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79414 ']' 00:26:37.399 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79414 00:26:37.399 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:26:37.399 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.399 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79414 00:26:37.399 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:37.658 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:37.658 killing process with pid 79414 00:26:37.658 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79414' 00:26:37.658 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79414 00:26:37.658 20:44:45 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79414 00:26:40.193 20:44:48 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:44.387 262144+0 records in 00:26:44.387 262144+0 records out 00:26:44.387 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.08058 s, 263 MB/s 00:26:44.387 20:44:52 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:45.790 20:44:53 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:46.049 [2024-11-25 20:44:53.978914] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:26:46.049 [2024-11-25 20:44:53.979726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79656 ] 00:26:46.049 [2024-11-25 20:44:54.163694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.309 [2024-11-25 20:44:54.301686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.879 [2024-11-25 20:44:54.721189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:46.879 [2024-11-25 20:44:54.721261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:46.879 [2024-11-25 20:44:54.887515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.887598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:46.879 [2024-11-25 20:44:54.887616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:46.879 [2024-11-25 20:44:54.887626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.887675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.887692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:46.879 [2024-11-25 20:44:54.887703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:46.879 [2024-11-25 20:44:54.887713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.887737] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:26:46.879 [2024-11-25 20:44:54.888757] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:46.879 [2024-11-25 20:44:54.888786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.888798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:46.879 [2024-11-25 20:44:54.888809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:26:46.879 [2024-11-25 20:44:54.888820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.891320] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:46.879 [2024-11-25 20:44:54.911275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.911314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:46.879 [2024-11-25 20:44:54.911336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.988 ms 00:26:46.879 [2024-11-25 20:44:54.911347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.911431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.911445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:46.879 [2024-11-25 20:44:54.911456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:46.879 [2024-11-25 20:44:54.911466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.924172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.924200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:46.879 [2024-11-25 20:44:54.924213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.643 ms 00:26:46.879 [2024-11-25 20:44:54.924228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.924331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.924356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:46.879 [2024-11-25 20:44:54.924367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:46.879 [2024-11-25 20:44:54.924378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.924436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.924449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:46.879 [2024-11-25 20:44:54.924459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:46.879 [2024-11-25 20:44:54.924469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.924501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:46.879 [2024-11-25 20:44:54.930206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.930240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:46.879 [2024-11-25 20:44:54.930257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.721 ms 00:26:46.879 [2024-11-25 20:44:54.930268] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.930301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.930312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:46.879 [2024-11-25 20:44:54.930336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:46.879 [2024-11-25 20:44:54.930348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.930386] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:46.879 [2024-11-25 20:44:54.930413] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:46.879 [2024-11-25 20:44:54.930451] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:46.879 [2024-11-25 20:44:54.930474] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:46.879 [2024-11-25 20:44:54.930569] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:46.879 [2024-11-25 20:44:54.930583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:46.879 [2024-11-25 20:44:54.930597] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:46.879 [2024-11-25 20:44:54.930611] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:46.879 [2024-11-25 20:44:54.930623] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:46.879 [2024-11-25 20:44:54.930635] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:46.879 [2024-11-25 20:44:54.930647] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:46.879 [2024-11-25 20:44:54.930657] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:46.879 [2024-11-25 20:44:54.930671] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:46.879 [2024-11-25 20:44:54.930682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.930693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:46.879 [2024-11-25 20:44:54.930704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:26:46.879 [2024-11-25 20:44:54.930714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.930785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.879 [2024-11-25 20:44:54.930796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:46.879 [2024-11-25 20:44:54.930806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:46.879 [2024-11-25 20:44:54.930816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.879 [2024-11-25 20:44:54.930919] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:46.879 [2024-11-25 20:44:54.930940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:46.879 [2024-11-25 20:44:54.930951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:26:46.879 [2024-11-25 20:44:54.930962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.879 [2024-11-25 20:44:54.930972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:46.879 [2024-11-25 20:44:54.930982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:46.879 [2024-11-25 20:44:54.930993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:46.879 [2024-11-25 20:44:54.931003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:46.879 [2024-11-25 20:44:54.931013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:46.879 [2024-11-25 20:44:54.931023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:46.879 [2024-11-25 20:44:54.931033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:46.879 [2024-11-25 20:44:54.931043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:46.879 [2024-11-25 20:44:54.931052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:46.879 [2024-11-25 20:44:54.931073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:46.879 [2024-11-25 20:44:54.931083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:46.879 [2024-11-25 20:44:54.931093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.879 [2024-11-25 20:44:54.931102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:46.879 [2024-11-25 20:44:54.931112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:46.879 [2024-11-25 20:44:54.931122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.879 [2024-11-25 20:44:54.931131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:46.879 [2024-11-25 20:44:54.931141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:46.879 [2024-11-25 20:44:54.931151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.879 [2024-11-25 20:44:54.931160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:46.879 [2024-11-25 20:44:54.931169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:46.879 [2024-11-25 20:44:54.931179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.879 [2024-11-25 20:44:54.931188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:46.879 [2024-11-25 20:44:54.931197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:46.879 [2024-11-25 20:44:54.931206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.879 [2024-11-25 20:44:54.931214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:46.879 [2024-11-25 20:44:54.931224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:46.879 [2024-11-25 20:44:54.931232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:46.879 [2024-11-25 20:44:54.931242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:46.879 [2024-11-25 20:44:54.931251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:46.879 [2024-11-25 20:44:54.931260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:46.879 [2024-11-25 20:44:54.931269] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:26:46.879 [2024-11-25 20:44:54.931278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:46.880 [2024-11-25 20:44:54.931287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:46.880 [2024-11-25 20:44:54.931296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:46.880 [2024-11-25 20:44:54.931306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:46.880 [2024-11-25 20:44:54.931315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.880 [2024-11-25 20:44:54.931336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:46.880 [2024-11-25 20:44:54.931347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:46.880 [2024-11-25 20:44:54.931356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.880 [2024-11-25 20:44:54.931365] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:46.880 [2024-11-25 20:44:54.931376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:46.880 [2024-11-25 20:44:54.931386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:46.880 [2024-11-25 20:44:54.931397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:46.880 [2024-11-25 20:44:54.931407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:46.880 [2024-11-25 20:44:54.931418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:46.880 [2024-11-25 20:44:54.931427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:46.880 [2024-11-25 20:44:54.931437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:46.880 [2024-11-25 20:44:54.931446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:46.880 [2024-11-25 20:44:54.931456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:46.880 [2024-11-25 20:44:54.931467] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:46.880 [2024-11-25 20:44:54.931480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:46.880 [2024-11-25 20:44:54.931495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:46.880 [2024-11-25 20:44:54.931506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:46.880 [2024-11-25 20:44:54.931517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:46.880 [2024-11-25 20:44:54.931528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:46.880 [2024-11-25 20:44:54.931539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:46.880 [2024-11-25 20:44:54.931549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:46.880 [2024-11-25 20:44:54.931560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:46.880 [2024-11-25 20:44:54.931571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:46.880 [2024-11-25 20:44:54.931581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:46.880 [2024-11-25 20:44:54.931592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:46.880 [2024-11-25 20:44:54.931602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:46.880 [2024-11-25 20:44:54.931613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:46.880 [2024-11-25 20:44:54.931623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:46.880 [2024-11-25 20:44:54.931634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:46.880 [2024-11-25 20:44:54.931644] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:46.880 [2024-11-25 20:44:54.931656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:46.880 [2024-11-25 20:44:54.931667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:46.880 [2024-11-25 20:44:54.931678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:46.880 [2024-11-25 20:44:54.931688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:46.880 [2024-11-25 20:44:54.931698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:46.880 [2024-11-25 20:44:54.931708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.880 [2024-11-25 20:44:54.931719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:46.880 [2024-11-25 20:44:54.931729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:26:46.880 [2024-11-25 20:44:54.931739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.880 [2024-11-25 20:44:54.982368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.880 [2024-11-25 20:44:54.982407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:46.880 [2024-11-25 20:44:54.982438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.658 ms 00:26:46.880 [2024-11-25 20:44:54.982455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.880 [2024-11-25 20:44:54.982537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.880 [2024-11-25 20:44:54.982549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:46.880 [2024-11-25 20:44:54.982560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.054 ms 00:26:46.880 [2024-11-25 20:44:54.982570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.047391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.047428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.140 [2024-11-25 20:44:55.047442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.842 ms 00:26:47.140 [2024-11-25 20:44:55.047453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.047508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.047535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.140 [2024-11-25 20:44:55.047547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:47.140 [2024-11-25 20:44:55.047557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.048407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.048430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.140 [2024-11-25 20:44:55.048442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:26:47.140 [2024-11-25 20:44:55.048452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.048585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.048600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:47.140 [2024-11-25 20:44:55.048618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:26:47.140 [2024-11-25 20:44:55.048628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.071701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.071735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.140 [2024-11-25 20:44:55.071769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.087 ms 00:26:47.140 [2024-11-25 20:44:55.071781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.091302] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:47.140 [2024-11-25 20:44:55.091368] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:47.140 [2024-11-25 20:44:55.091386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.091398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:47.140 [2024-11-25 20:44:55.091410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.528 ms 00:26:47.140 [2024-11-25 20:44:55.091421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.120569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.120612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:47.140 [2024-11-25 20:44:55.120642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.154 ms 00:26:47.140 [2024-11-25 20:44:55.120653] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.138022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.138058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:47.140 [2024-11-25 20:44:55.138087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.348 ms 00:26:47.140 [2024-11-25 20:44:55.138097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.155809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.155843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:47.140 [2024-11-25 20:44:55.155855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.701 ms 00:26:47.140 [2024-11-25 20:44:55.155864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.156605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.156631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:47.140 [2024-11-25 20:44:55.156644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:26:47.140 [2024-11-25 20:44:55.156658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.250782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.250861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:47.140 [2024-11-25 20:44:55.250880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.252 ms 00:26:47.140 [2024-11-25 20:44:55.250914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.261295] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:47.140 [2024-11-25 20:44:55.264408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.264441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:47.140 [2024-11-25 20:44:55.264455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.467 ms 00:26:47.140 [2024-11-25 20:44:55.264466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.264563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.264578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:47.140 [2024-11-25 20:44:55.264589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:47.140 [2024-11-25 20:44:55.264599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.264692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.264706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:47.140 [2024-11-25 20:44:55.264717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:47.140 [2024-11-25 20:44:55.264727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.264751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.264761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:26:47.140 [2024-11-25 20:44:55.264772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:47.140 [2024-11-25 20:44:55.264798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.140 [2024-11-25 20:44:55.264841] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:47.140 [2024-11-25 20:44:55.264858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.140 [2024-11-25 20:44:55.264868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:47.140 [2024-11-25 20:44:55.264879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:47.140 [2024-11-25 20:44:55.264889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.399 [2024-11-25 20:44:55.301665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.399 [2024-11-25 20:44:55.301721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:47.400 [2024-11-25 20:44:55.301736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.814 ms 00:26:47.400 [2024-11-25 20:44:55.301747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.400 [2024-11-25 20:44:55.301856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.400 [2024-11-25 20:44:55.301869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:47.400 [2024-11-25 20:44:55.301881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:47.400 [2024-11-25 20:44:55.301891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.400 [2024-11-25 20:44:55.303424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.025 ms, result 0 00:26:48.338  [2024-11-25T20:44:57.412Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-25T20:44:58.349Z] Copying: 49/1024 [MB] (25 MBps) [2024-11-25T20:44:59.730Z] Copying: 75/1024 [MB] (25 MBps) [2024-11-25T20:45:00.668Z] Copying: 100/1024 [MB] (25 MBps) [2024-11-25T20:45:01.607Z] Copying: 125/1024 [MB] (25 MBps) [2024-11-25T20:45:02.542Z] Copying: 151/1024 [MB] (25 MBps) [2024-11-25T20:45:03.479Z] Copying: 177/1024 [MB] (25 MBps) [2024-11-25T20:45:04.417Z] Copying: 203/1024 [MB] (25 MBps) [2024-11-25T20:45:05.354Z] Copying: 228/1024 [MB] (25 MBps) [2024-11-25T20:45:06.733Z] Copying: 253/1024 [MB] (25 MBps) [2024-11-25T20:45:07.302Z] Copying: 278/1024 [MB] (25 MBps) [2024-11-25T20:45:08.683Z] Copying: 303/1024 [MB] (25 MBps) [2024-11-25T20:45:09.622Z] Copying: 328/1024 [MB] (25 MBps) [2024-11-25T20:45:10.580Z] Copying: 352/1024 [MB] (23 MBps) [2024-11-25T20:45:11.519Z] Copying: 377/1024 [MB] (25 MBps) [2024-11-25T20:45:12.459Z] Copying: 403/1024 [MB] (25 MBps) [2024-11-25T20:45:13.399Z] Copying: 427/1024 [MB] (24 MBps) [2024-11-25T20:45:14.334Z] Copying: 453/1024 [MB] (25 MBps) [2024-11-25T20:45:15.298Z] Copying: 478/1024 [MB] (25 MBps) [2024-11-25T20:45:16.687Z] Copying: 503/1024 [MB] (25 MBps) [2024-11-25T20:45:17.293Z] Copying: 528/1024 [MB] (25 MBps) [2024-11-25T20:45:18.671Z] Copying: 552/1024 [MB] (23 MBps) [2024-11-25T20:45:19.609Z] Copying: 577/1024 [MB] (25 MBps) [2024-11-25T20:45:20.546Z] Copying: 603/1024 [MB] (25 MBps) [2024-11-25T20:45:21.484Z] Copying: 628/1024 [MB] (25 MBps) [2024-11-25T20:45:22.423Z] Copying: 653/1024 [MB] (25 MBps) [2024-11-25T20:45:23.359Z] Copying: 678/1024 [MB] (25 
MBps) [2024-11-25T20:45:24.295Z] Copying: 704/1024 [MB] (25 MBps) [2024-11-25T20:45:25.674Z] Copying: 729/1024 [MB] (25 MBps) [2024-11-25T20:45:26.612Z] Copying: 753/1024 [MB] (24 MBps) [2024-11-25T20:45:27.551Z] Copying: 778/1024 [MB] (25 MBps) [2024-11-25T20:45:28.489Z] Copying: 804/1024 [MB] (25 MBps) [2024-11-25T20:45:29.425Z] Copying: 829/1024 [MB] (25 MBps) [2024-11-25T20:45:30.363Z] Copying: 854/1024 [MB] (25 MBps) [2024-11-25T20:45:31.300Z] Copying: 879/1024 [MB] (25 MBps) [2024-11-25T20:45:32.679Z] Copying: 904/1024 [MB] (25 MBps) [2024-11-25T20:45:33.617Z] Copying: 929/1024 [MB] (25 MBps) [2024-11-25T20:45:34.556Z] Copying: 954/1024 [MB] (24 MBps) [2024-11-25T20:45:35.494Z] Copying: 979/1024 [MB] (24 MBps) [2024-11-25T20:45:36.434Z] Copying: 1003/1024 [MB] (24 MBps) [2024-11-25T20:45:36.434Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-25 20:45:36.065512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.298 [2024-11-25 20:45:36.065564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:28.298 [2024-11-25 20:45:36.065583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:28.298 [2024-11-25 20:45:36.065593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.298 [2024-11-25 20:45:36.065615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:28.298 [2024-11-25 20:45:36.070304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.298 [2024-11-25 20:45:36.070345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:28.298 [2024-11-25 20:45:36.070358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:27:28.298 [2024-11-25 20:45:36.070376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.298 [2024-11-25 20:45:36.072249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.298 [2024-11-25 20:45:36.072289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:28.298 [2024-11-25 20:45:36.072302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.849 ms 00:27:28.298 [2024-11-25 20:45:36.072313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.298 [2024-11-25 20:45:36.089511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.298 [2024-11-25 20:45:36.089550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:28.298 [2024-11-25 20:45:36.089579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.196 ms 00:27:28.298 [2024-11-25 20:45:36.089591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.298 [2024-11-25 20:45:36.094440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.298 [2024-11-25 20:45:36.094471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:28.298 [2024-11-25 20:45:36.094483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.807 ms 00:27:28.298 [2024-11-25 20:45:36.094493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.298 [2024-11-25 20:45:36.130765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.298 [2024-11-25 20:45:36.130811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:28.298 [2024-11-25 20:45:36.130824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 36.242 ms 00:27:28.298 [2024-11-25 20:45:36.130834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.298 [2024-11-25 20:45:36.150750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.298 [2024-11-25 20:45:36.150785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:28.298 [2024-11-25 20:45:36.150799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.897 ms 00:27:28.298 [2024-11-25 20:45:36.150809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.299 [2024-11-25 20:45:36.150947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.299 [2024-11-25 20:45:36.150967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:28.299 [2024-11-25 20:45:36.150978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:28.299 [2024-11-25 20:45:36.150988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.299 [2024-11-25 20:45:36.186269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.299 [2024-11-25 20:45:36.186303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:28.299 [2024-11-25 20:45:36.186331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.323 ms 00:27:28.299 [2024-11-25 20:45:36.186347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.299 [2024-11-25 20:45:36.220639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.299 [2024-11-25 20:45:36.220672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:28.299 [2024-11-25 20:45:36.220683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.311 ms 00:27:28.299 [2024-11-25 20:45:36.220692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.299 [2024-11-25 20:45:36.254155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.299 [2024-11-25 20:45:36.254191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:28.299 [2024-11-25 20:45:36.254203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.466 ms 00:27:28.299 [2024-11-25 20:45:36.254212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.299 [2024-11-25 20:45:36.288507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.299 [2024-11-25 20:45:36.288539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:28.299 [2024-11-25 20:45:36.288551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.256 ms 00:27:28.299 [2024-11-25 20:45:36.288559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.299 [2024-11-25 20:45:36.288609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:28.299 [2024-11-25 20:45:36.288625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:27:28.299 [2024-11-25 20:45:36.288675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.288996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:28.299 [2024-11-25 20:45:36.289399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289461] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:28.300 [2024-11-25 20:45:36.289707] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:28.300 [2024-11-25 20:45:36.289717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 784b2adc-fa29-40f2-b6cc-6376a96317b8 00:27:28.300 [2024-11-25 20:45:36.289733] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:28.300 [2024-11-25 20:45:36.289743] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:27:28.300 [2024-11-25 20:45:36.289753] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:28.300 [2024-11-25 20:45:36.289764] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:28.300 [2024-11-25 20:45:36.289773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:28.300 [2024-11-25 20:45:36.289795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:28.300 [2024-11-25 20:45:36.289804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:28.300 [2024-11-25 20:45:36.289813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:28.300 [2024-11-25 20:45:36.289822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:28.300 [2024-11-25 20:45:36.289832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.300 [2024-11-25 20:45:36.289842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:28.300 [2024-11-25 20:45:36.289852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:27:28.300 [2024-11-25 20:45:36.289862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.300 [2024-11-25 20:45:36.310118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.300 [2024-11-25 20:45:36.310151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:28.300 [2024-11-25 20:45:36.310164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.256 ms 00:27:28.300 [2024-11-25 20:45:36.310173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.300 [2024-11-25 20:45:36.310785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.300 [2024-11-25 20:45:36.310803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:28.300 [2024-11-25 20:45:36.310814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:27:28.300 [2024-11-25 20:45:36.310830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.300 [2024-11-25 20:45:36.363900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.300 [2024-11-25 20:45:36.363934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:28.300 [2024-11-25 20:45:36.363946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.300 [2024-11-25 20:45:36.363957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.300 [2024-11-25 20:45:36.364033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.300 [2024-11-25 20:45:36.364044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:28.300 [2024-11-25 20:45:36.364054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.300 [2024-11-25 20:45:36.364069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.300 [2024-11-25 20:45:36.364138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.300 [2024-11-25 20:45:36.364152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:28.300 [2024-11-25 20:45:36.364163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.300 [2024-11-25 20:45:36.364173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.300 [2024-11-25 20:45:36.364191] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.300 [2024-11-25 20:45:36.364202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:28.300 [2024-11-25 20:45:36.364212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.300 [2024-11-25 20:45:36.364223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.493346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.560 [2024-11-25 20:45:36.493412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:28.560 [2024-11-25 20:45:36.493429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.560 [2024-11-25 20:45:36.493457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.596217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.560 [2024-11-25 20:45:36.596278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:28.560 [2024-11-25 20:45:36.596310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.560 [2024-11-25 20:45:36.596322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.596463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.560 [2024-11-25 20:45:36.596478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.560 [2024-11-25 20:45:36.596489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.560 [2024-11-25 20:45:36.596500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.596549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.560 [2024-11-25 20:45:36.596560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.560 [2024-11-25 20:45:36.596571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.560 [2024-11-25 20:45:36.596581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.596744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.560 [2024-11-25 20:45:36.596763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.560 [2024-11-25 20:45:36.596774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.560 [2024-11-25 20:45:36.596785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.596825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.560 [2024-11-25 20:45:36.596837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:28.560 [2024-11-25 20:45:36.596848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.560 [2024-11-25 20:45:36.596859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.596904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.560 [2024-11-25 20:45:36.596920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.560 [2024-11-25 20:45:36.596930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.560 [2024-11-25 20:45:36.596940] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.596990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.560 [2024-11-25 20:45:36.597001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.560 [2024-11-25 20:45:36.597012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.560 [2024-11-25 20:45:36.597022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.560 [2024-11-25 20:45:36.597168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.470 ms, result 0 00:27:29.940 00:27:29.940 00:27:29.940 20:45:37 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:29.940 [2024-11-25 20:45:37.976746] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:27:29.940 [2024-11-25 20:45:37.976879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80091 ] 00:27:30.199 [2024-11-25 20:45:38.160751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.199 [2024-11-25 20:45:38.289950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.770 [2024-11-25 20:45:38.686456] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:30.770 [2024-11-25 20:45:38.686537] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:30.770 [2024-11-25 20:45:38.851143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.851205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:30.770 [2024-11-25 20:45:38.851237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:30.770 [2024-11-25 20:45:38.851248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.851299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.851315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:30.770 [2024-11-25 20:45:38.851326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:30.770 [2024-11-25 20:45:38.851336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.851369] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:30.770 [2024-11-25 20:45:38.852378] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:30.770 [2024-11-25 20:45:38.852407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.852419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:30.770 [2024-11-25 20:45:38.852430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:27:30.770 [2024-11-25 20:45:38.852441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.854944] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:30.770 [2024-11-25 20:45:38.875232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.875267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:30.770 [2024-11-25 20:45:38.875283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.323 ms 00:27:30.770 [2024-11-25 20:45:38.875294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.875368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.875382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:30.770 [2024-11-25 20:45:38.875393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:30.770 [2024-11-25 20:45:38.875403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.888155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.888180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:30.770 [2024-11-25 20:45:38.888195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.702 ms 00:27:30.770 [2024-11-25 20:45:38.888210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.888312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.888325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:30.770 [2024-11-25 20:45:38.888336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:30.770 [2024-11-25 20:45:38.888355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.888411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.888424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:30.770 [2024-11-25 20:45:38.888435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:30.770 [2024-11-25 20:45:38.888446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.888478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:30.770 [2024-11-25 20:45:38.894442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.894472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:30.770 [2024-11-25 20:45:38.894489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.981 ms 00:27:30.770 [2024-11-25 20:45:38.894500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.894533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.894545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:30.770 [2024-11-25 20:45:38.894556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:30.770 [2024-11-25 20:45:38.894566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.894603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:30.770 [2024-11-25 20:45:38.894652] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:30.770 [2024-11-25 20:45:38.894693] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:30.770 [2024-11-25 20:45:38.894716] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:30.770 [2024-11-25 20:45:38.894807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:30.770 [2024-11-25 20:45:38.894820] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:30.770 [2024-11-25 20:45:38.894834] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:30.770 [2024-11-25 20:45:38.894847] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:30.770 [2024-11-25 20:45:38.894859] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:30.770 [2024-11-25 20:45:38.894887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:30.770 [2024-11-25 20:45:38.894898] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:30.770 [2024-11-25 20:45:38.894908] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:30.770 [2024-11-25 20:45:38.894923] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:30.770 [2024-11-25 20:45:38.894936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.894948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:30.770 [2024-11-25 20:45:38.894959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:27:30.770 [2024-11-25 20:45:38.894970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.895044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.770 [2024-11-25 20:45:38.895055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:30.770 [2024-11-25 20:45:38.895066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:30.770 [2024-11-25 20:45:38.895077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.770 [2024-11-25 20:45:38.895179] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:30.770 [2024-11-25 20:45:38.895199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:30.770 [2024-11-25 20:45:38.895210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.770 [2024-11-25 20:45:38.895221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.770 [2024-11-25 20:45:38.895233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:30.770 [2024-11-25 20:45:38.895245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:30.770 [2024-11-25 20:45:38.895255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:30.770 [2024-11-25 20:45:38.895264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:30.770 [2024-11-25 20:45:38.895274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:30.770 [2024-11-25 
20:45:38.895285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.770 [2024-11-25 20:45:38.895294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:30.770 [2024-11-25 20:45:38.895304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:30.770 [2024-11-25 20:45:38.895313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.770 [2024-11-25 20:45:38.895346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:30.770 [2024-11-25 20:45:38.895357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:30.770 [2024-11-25 20:45:38.895367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.770 [2024-11-25 20:45:38.895376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:30.770 [2024-11-25 20:45:38.895385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:30.770 [2024-11-25 20:45:38.895395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.770 [2024-11-25 20:45:38.895405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:30.770 [2024-11-25 20:45:38.895415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:30.771 [2024-11-25 20:45:38.895425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.771 [2024-11-25 20:45:38.895434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:30.771 [2024-11-25 20:45:38.895443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:30.771 [2024-11-25 20:45:38.895452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.771 [2024-11-25 20:45:38.895462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:30.771 [2024-11-25 20:45:38.895471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:30.771 [2024-11-25 20:45:38.895480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.771 [2024-11-25 20:45:38.895489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:30.771 [2024-11-25 20:45:38.895498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:30.771 [2024-11-25 20:45:38.895507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.771 [2024-11-25 20:45:38.895516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:30.771 [2024-11-25 20:45:38.895525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:30.771 [2024-11-25 20:45:38.895534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.771 [2024-11-25 20:45:38.895543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:30.771 [2024-11-25 20:45:38.895552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:30.771 [2024-11-25 20:45:38.895561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.771 [2024-11-25 20:45:38.895579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:30.771 [2024-11-25 20:45:38.895589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:30.771 [2024-11-25 20:45:38.895599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.771 [2024-11-25 20:45:38.895609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:27:30.771 [2024-11-25 20:45:38.895618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:30.771 [2024-11-25 20:45:38.895628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.771 [2024-11-25 20:45:38.895638] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:30.771 [2024-11-25 20:45:38.895649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:30.771 [2024-11-25 20:45:38.895659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.771 [2024-11-25 20:45:38.895670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.771 [2024-11-25 20:45:38.895690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:30.771 [2024-11-25 20:45:38.895700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:30.771 [2024-11-25 20:45:38.895710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:30.771 [2024-11-25 20:45:38.895720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:30.771 [2024-11-25 20:45:38.895729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:30.771 [2024-11-25 20:45:38.895739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:30.771 [2024-11-25 20:45:38.895750] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:30.771 [2024-11-25 20:45:38.895763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.771 [2024-11-25 20:45:38.895780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:30.771 [2024-11-25 20:45:38.895791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:30.771 [2024-11-25 20:45:38.895802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:30.771 [2024-11-25 20:45:38.895812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:30.771 [2024-11-25 20:45:38.895823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:30.771 [2024-11-25 20:45:38.895833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:30.771 [2024-11-25 20:45:38.895844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:30.771 [2024-11-25 20:45:38.895855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:30.771 [2024-11-25 20:45:38.895866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:30.771 [2024-11-25 20:45:38.895876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:30.771 [2024-11-25 20:45:38.895887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:30.771 [2024-11-25 20:45:38.895898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:30.771 [2024-11-25 20:45:38.895909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:30.771 [2024-11-25 20:45:38.895920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:30.771 [2024-11-25 20:45:38.895934] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:30.771 [2024-11-25 20:45:38.895946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.771 [2024-11-25 20:45:38.895957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.771 [2024-11-25 20:45:38.895968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:30.771 [2024-11-25 20:45:38.895980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:30.771 [2024-11-25 20:45:38.895990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:30.771 [2024-11-25 20:45:38.896001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.771 [2024-11-25 20:45:38.896012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:30.771 [2024-11-25 20:45:38.896022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:27:30.771 [2024-11-25 20:45:38.896032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:38.945943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:38.945978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:31.032 [2024-11-25 20:45:38.945992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.940 ms 00:27:31.032 [2024-11-25 20:45:38.946024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:38.946108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:38.946120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:31.032 [2024-11-25 20:45:38.946131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:31.032 [2024-11-25 20:45:38.946141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.009860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.009895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.032 [2024-11-25 20:45:39.009909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.731 ms 00:27:31.032 [2024-11-25 20:45:39.009921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.009959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 
20:45:39.009975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.032 [2024-11-25 20:45:39.009987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:31.032 [2024-11-25 20:45:39.009997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.010846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.010867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.032 [2024-11-25 20:45:39.010880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:27:31.032 [2024-11-25 20:45:39.010891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.011030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.011043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.032 [2024-11-25 20:45:39.011062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:27:31.032 [2024-11-25 20:45:39.011073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.033539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.033571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.032 [2024-11-25 20:45:39.033606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.481 ms 00:27:31.032 [2024-11-25 20:45:39.033618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.053848] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:31.032 [2024-11-25 20:45:39.053884] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:31.032 [2024-11-25 20:45:39.053900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.053913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:31.032 [2024-11-25 20:45:39.053925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.192 ms 00:27:31.032 [2024-11-25 20:45:39.053935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.082388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.082424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:31.032 [2024-11-25 20:45:39.082453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.456 ms 00:27:31.032 [2024-11-25 20:45:39.082464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.099681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.099713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:31.032 [2024-11-25 20:45:39.099725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.188 ms 00:27:31.032 [2024-11-25 20:45:39.099734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.117076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.117108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:27:31.032 [2024-11-25 20:45:39.117120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.333 ms 00:27:31.032 [2024-11-25 20:45:39.117146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.032 [2024-11-25 20:45:39.117904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.032 [2024-11-25 20:45:39.117927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:31.032 [2024-11-25 20:45:39.117943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:27:31.032 [2024-11-25 20:45:39.117954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.291 [2024-11-25 20:45:39.216490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.291 [2024-11-25 20:45:39.216553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:31.291 [2024-11-25 20:45:39.216578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.670 ms 00:27:31.291 [2024-11-25 20:45:39.216590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.291 [2024-11-25 20:45:39.227399] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:31.291 [2024-11-25 20:45:39.231802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.291 [2024-11-25 20:45:39.231829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:31.291 [2024-11-25 20:45:39.231860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.180 ms 00:27:31.291 [2024-11-25 20:45:39.231871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.291 [2024-11-25 20:45:39.232007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.291 [2024-11-25 20:45:39.232021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:31.291 [2024-11-25 20:45:39.232034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:31.291 [2024-11-25 20:45:39.232050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.291 [2024-11-25 20:45:39.232144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.291 [2024-11-25 20:45:39.232157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:31.291 [2024-11-25 20:45:39.232169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:31.291 [2024-11-25 20:45:39.232179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.291 [2024-11-25 20:45:39.232208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.291 [2024-11-25 20:45:39.232220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:31.291 [2024-11-25 20:45:39.232230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:31.291 [2024-11-25 20:45:39.232241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.291 [2024-11-25 20:45:39.232285] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:31.292 [2024-11-25 20:45:39.232298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.292 [2024-11-25 20:45:39.232309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:31.292 [2024-11-25 20:45:39.232319] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:31.292 [2024-11-25 20:45:39.232329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.292 [2024-11-25 20:45:39.269942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.292 [2024-11-25 20:45:39.269980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:31.292 [2024-11-25 20:45:39.269994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.641 ms 00:27:31.292 [2024-11-25 20:45:39.270013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.292 [2024-11-25 20:45:39.270096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.292 [2024-11-25 20:45:39.270111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:31.292 [2024-11-25 20:45:39.270123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:31.292 [2024-11-25 20:45:39.270133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.292 [2024-11-25 20:45:39.271653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.661 ms, result 0 00:27:32.671  [2024-11-25T20:46:18.794Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-25 20:46:18.734842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.658 [2024-11-25 20:46:18.734935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:10.658 [2024-11-25 20:46:18.734958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:10.658 [2024-11-25 20:46:18.734971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.658 [2024-11-25 20:46:18.735001] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:10.658 [2024-11-25 20:46:18.741334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.658 [2024-11-25 20:46:18.741383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:10.658 [2024-11-25 20:46:18.741408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.311 ms 00:28:10.658 [2024-11-25 20:46:18.741421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.658 [2024-11-25 20:46:18.741703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.658 [2024-11-25 20:46:18.741719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:10.658 [2024-11-25 20:46:18.741732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:28:10.658 [2024-11-25 20:46:18.741745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.658 [2024-11-25 20:46:18.745004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.658 [2024-11-25 20:46:18.745028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:10.658 [2024-11-25 20:46:18.745039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.244 ms 00:28:10.658 [2024-11-25 20:46:18.745071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.658 [2024-11-25 20:46:18.749998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.658 [2024-11-25 20:46:18.750034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:10.658 [2024-11-25 20:46:18.750060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.912 ms 00:28:10.658 [2024-11-25 20:46:18.750071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.658 [2024-11-25 20:46:18.787362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.658 [2024-11-25 20:46:18.787405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:10.658 [2024-11-25 20:46:18.787421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.245 ms 00:28:10.658 [2024-11-25 20:46:18.787431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.919 [2024-11-25 20:46:18.809152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.919 [2024-11-25 20:46:18.809191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:10.919 [2024-11-25 20:46:18.809206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.712 ms 00:28:10.919 [2024-11-25 20:46:18.809218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.919 [2024-11-25 20:46:18.809395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.919 [2024-11-25 20:46:18.809411]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:10.919 [2024-11-25 20:46:18.809423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:28:10.919 [2024-11-25 20:46:18.809434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.919 [2024-11-25 20:46:18.845705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.919 [2024-11-25 20:46:18.845750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:10.919 [2024-11-25 20:46:18.845780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.311 ms 00:28:10.919 [2024-11-25 20:46:18.845790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.919 [2024-11-25 20:46:18.880909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.919 [2024-11-25 20:46:18.880949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:10.919 [2024-11-25 20:46:18.880978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.133 ms 00:28:10.919 [2024-11-25 20:46:18.880989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.919 [2024-11-25 20:46:18.915993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.919 [2024-11-25 20:46:18.916046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:10.919 [2024-11-25 20:46:18.916061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.020 ms 00:28:10.919 [2024-11-25 20:46:18.916071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.919 [2024-11-25 20:46:18.951559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.919 [2024-11-25 20:46:18.951600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:10.919 [2024-11-25 20:46:18.951614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.461 ms 00:28:10.919 [2024-11-25 20:46:18.951624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.919 [2024-11-25 20:46:18.951679] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:10.919 [2024-11-25 20:46:18.951706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951814] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:10.919 [2024-11-25 20:46:18.951976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.951987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.951997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 
[2024-11-25 20:46:18.952088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:28:10.920 [2024-11-25 20:46:18.952363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:10.920 [2024-11-25 20:46:18.952821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:10.920 [2024-11-25 20:46:18.952837] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 784b2adc-fa29-40f2-b6cc-6376a96317b8 00:28:10.920 [2024-11-25 20:46:18.952848] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:10.920 [2024-11-25 20:46:18.952859] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:10.920 [2024-11-25 20:46:18.952869] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:10.920 [2024-11-25 20:46:18.952880] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:10.920 [2024-11-25 20:46:18.952903] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:10.920 [2024-11-25 20:46:18.952914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:10.920 [2024-11-25 20:46:18.952924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:10.920 [2024-11-25 20:46:18.952934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:10.920 [2024-11-25 20:46:18.952943] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:28:10.920 [2024-11-25 20:46:18.952953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.920 [2024-11-25 20:46:18.952964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:10.920 [2024-11-25 20:46:18.952975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:28:10.920 [2024-11-25 20:46:18.952985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.920 [2024-11-25 20:46:18.974242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.920 [2024-11-25 20:46:18.974279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:10.920 [2024-11-25 20:46:18.974293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.218 ms 00:28:10.921 [2024-11-25 20:46:18.974304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.921 [2024-11-25 20:46:18.974918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.921 [2024-11-25 20:46:18.974941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:10.921 [2024-11-25 20:46:18.974958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:28:10.921 [2024-11-25 20:46:18.974969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.921 [2024-11-25 20:46:19.029716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.921 [2024-11-25 20:46:19.029767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.921 [2024-11-25 20:46:19.029799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.921 [2024-11-25 20:46:19.029810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.921 [2024-11-25 20:46:19.029883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.921 [2024-11-25 20:46:19.029896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:10.921 [2024-11-25 20:46:19.029915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.921 [2024-11-25 20:46:19.029925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.921 [2024-11-25 20:46:19.029998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.921 [2024-11-25 20:46:19.030012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:10.921 [2024-11-25 20:46:19.030024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.921 [2024-11-25 20:46:19.030034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.921 [2024-11-25 20:46:19.030054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.921 [2024-11-25 20:46:19.030065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:10.921 [2024-11-25 20:46:19.030076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.921 [2024-11-25 20:46:19.030092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.165978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.180 [2024-11-25 20:46:19.166076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:11.180 [2024-11-25 20:46:19.166094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:11.180 [2024-11-25 20:46:19.166106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.270014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.180 [2024-11-25 20:46:19.270095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:11.180 [2024-11-25 20:46:19.270113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.180 [2024-11-25 20:46:19.270147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.270280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.180 [2024-11-25 20:46:19.270295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:11.180 [2024-11-25 20:46:19.270307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.180 [2024-11-25 20:46:19.270318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.270393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.180 [2024-11-25 20:46:19.270407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:11.180 [2024-11-25 20:46:19.270420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.180 [2024-11-25 20:46:19.270430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.270549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.180 [2024-11-25 20:46:19.270563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:11.180 [2024-11-25 20:46:19.270574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.180 [2024-11-25 20:46:19.270585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.270625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.180 [2024-11-25 20:46:19.270638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:11.180 [2024-11-25 20:46:19.270650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.180 [2024-11-25 20:46:19.270660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.270712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.180 [2024-11-25 20:46:19.270724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:11.180 [2024-11-25 20:46:19.270735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.180 [2024-11-25 20:46:19.270745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.270796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.180 [2024-11-25 20:46:19.270808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:11.180 [2024-11-25 20:46:19.270819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.180 [2024-11-25 20:46:19.270830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.180 [2024-11-25 20:46:19.270975] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.979 ms, result 0 00:28:12.558 00:28:12.558 00:28:12.558 20:46:20 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:14.462 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:14.462 20:46:22 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:14.462 [2024-11-25 20:46:22.255486] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:28:14.462 [2024-11-25 20:46:22.256155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80541 ] 00:28:14.462 [2024-11-25 20:46:22.438681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.462 [2024-11-25 20:46:22.583294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.028 [2024-11-25 20:46:23.002424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:15.028 [2024-11-25 20:46:23.002494] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:15.288 [2024-11-25 20:46:23.168759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.168827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:15.288 [2024-11-25 20:46:23.168861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:15.288 [2024-11-25 20:46:23.168872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.168930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.168947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:15.288 [2024-11-25 20:46:23.168959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:15.288 [2024-11-25 20:46:23.168970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.168994] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:15.288 [2024-11-25 20:46:23.169965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:15.288 [2024-11-25 20:46:23.169993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.170006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:15.288 [2024-11-25 20:46:23.170018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:28:15.288 [2024-11-25 20:46:23.170029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.172438] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:15.288 [2024-11-25 20:46:23.192865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.192905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:15.288 [2024-11-25 20:46:23.192940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.461 ms 00:28:15.288 [2024-11-25 20:46:23.192952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.193037] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.193051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:15.288 [2024-11-25 20:46:23.193064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:15.288 [2024-11-25 20:46:23.193075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.205723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.205756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:15.288 [2024-11-25 20:46:23.205773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.589 ms 00:28:15.288 [2024-11-25 20:46:23.205789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.205884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.205898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:15.288 [2024-11-25 20:46:23.205911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:15.288 [2024-11-25 20:46:23.205922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.205988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.206001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:15.288 [2024-11-25 20:46:23.206012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:15.288 [2024-11-25 20:46:23.206032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.206068] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:15.288 [2024-11-25 20:46:23.212010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.212040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:15.288 [2024-11-25 20:46:23.212057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.962 ms 00:28:15.288 [2024-11-25 20:46:23.212067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.212101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.288 [2024-11-25 20:46:23.212112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:15.288 [2024-11-25 20:46:23.212124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:15.288 [2024-11-25 20:46:23.212134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.288 [2024-11-25 20:46:23.212174] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:15.288 [2024-11-25 20:46:23.212201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:15.288 [2024-11-25 20:46:23.212241] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:15.288 [2024-11-25 20:46:23.212263] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:15.288 [2024-11-25 20:46:23.212368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:15.288 [2024-11-25 
20:46:23.212383] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:15.288 [2024-11-25 20:46:23.212414] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:15.288 [2024-11-25 20:46:23.212428] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:15.288 [2024-11-25 20:46:23.212442] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:15.289 [2024-11-25 20:46:23.212455] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:15.289 [2024-11-25 20:46:23.212466] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:15.289 [2024-11-25 20:46:23.212476] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:15.289 [2024-11-25 20:46:23.212491] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:15.289 [2024-11-25 20:46:23.212502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.289 [2024-11-25 20:46:23.212512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:15.289 [2024-11-25 20:46:23.212523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:28:15.289 [2024-11-25 20:46:23.212533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.289 [2024-11-25 20:46:23.212606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.289 [2024-11-25 20:46:23.212618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:15.289 [2024-11-25 20:46:23.212629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:15.289 [2024-11-25 20:46:23.212639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.289 [2024-11-25 20:46:23.212745] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:15.289 [2024-11-25 20:46:23.212762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:15.289 [2024-11-25 20:46:23.212773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:15.289 [2024-11-25 20:46:23.212785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.289 [2024-11-25 20:46:23.212796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:15.289 [2024-11-25 20:46:23.212806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:15.289 [2024-11-25 20:46:23.212817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:15.289 [2024-11-25 20:46:23.212827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:15.289 [2024-11-25 20:46:23.212837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:15.289 [2024-11-25 20:46:23.212847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.289 [2024-11-25 20:46:23.212858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:15.289 [2024-11-25 20:46:23.212868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:15.289 [2024-11-25 20:46:23.212877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.289 [2024-11-25 20:46:23.212898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:15.289 [2024-11-25 20:46:23.212909] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:15.289 [2024-11-25 20:46:23.212920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.289 [2024-11-25 20:46:23.212930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:15.289 [2024-11-25 20:46:23.212940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:15.289 [2024-11-25 20:46:23.212951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.289 [2024-11-25 20:46:23.212961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:15.289 [2024-11-25 20:46:23.212970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:15.289 [2024-11-25 20:46:23.212980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.289 [2024-11-25 20:46:23.212990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:15.289 [2024-11-25 20:46:23.213000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:15.289 [2024-11-25 20:46:23.213010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.289 [2024-11-25 20:46:23.213019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:15.289 [2024-11-25 20:46:23.213028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:15.289 [2024-11-25 20:46:23.213038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.289 [2024-11-25 20:46:23.213047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:15.289 [2024-11-25 20:46:23.213056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:15.289 [2024-11-25 20:46:23.213065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.289 [2024-11-25 20:46:23.213075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:15.289 [2024-11-25 20:46:23.213084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:15.289 [2024-11-25 20:46:23.213094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.289 [2024-11-25 20:46:23.213103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:15.289 [2024-11-25 20:46:23.213113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:15.289 [2024-11-25 20:46:23.213122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.289 [2024-11-25 20:46:23.213131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:15.289 [2024-11-25 20:46:23.213140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:15.289 [2024-11-25 20:46:23.213149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.289 [2024-11-25 20:46:23.213158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:15.289 [2024-11-25 20:46:23.213168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:15.289 [2024-11-25 20:46:23.213180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.289 [2024-11-25 20:46:23.213189] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:15.289 [2024-11-25 20:46:23.213200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:15.289 [2024-11-25 20:46:23.213211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:28:15.289 [2024-11-25 20:46:23.213221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.289 [2024-11-25 20:46:23.213232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:15.289 [2024-11-25 20:46:23.213242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:15.289 [2024-11-25 20:46:23.213252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:15.289 [2024-11-25 20:46:23.213262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:15.289 [2024-11-25 20:46:23.213271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:15.289 [2024-11-25 20:46:23.213281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:15.289 [2024-11-25 20:46:23.213293] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:15.289 [2024-11-25 20:46:23.213306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.289 [2024-11-25 20:46:23.213322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:15.289 [2024-11-25 20:46:23.213334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:15.289 [2024-11-25 20:46:23.213355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:15.289 [2024-11-25 20:46:23.213367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:15.289 [2024-11-25 20:46:23.213378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:15.289 [2024-11-25 20:46:23.213390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:15.289 [2024-11-25 20:46:23.213401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:15.289 [2024-11-25 20:46:23.213412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:15.289 [2024-11-25 20:46:23.213423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:15.289 [2024-11-25 20:46:23.213434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:15.289 [2024-11-25 20:46:23.213446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:15.289 [2024-11-25 20:46:23.213457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:15.289 [2024-11-25 20:46:23.213468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:15.289 [2024-11-25 20:46:23.213479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
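The Region lines in the superblock dump above carry the same layout that dump_region printed in MiB a few entries earlier. As a minimal sketch, assuming a 4 KiB FTL logical block size (an assumption on our part, though every figure in this log is consistent with it), the hex blk_offs/blk_sz fields decode back into those MiB values:

    # Decode a "Region type:... blk_offs:... blk_sz:..." entry into the MiB
    # figures that dump_region prints. FTL_BLOCK_SIZE = 4096 is assumed, not
    # taken from this log; it matches every offset/size shown above.
    FTL_BLOCK_SIZE = 4096

    def region_mib(blk_offs: int, blk_sz: int) -> tuple[float, float]:
        """Return (offset, size) in MiB for a region given in 4 KiB blocks."""
        to_mib = FTL_BLOCK_SIZE / (1024 * 1024)
        return blk_offs * to_mib, blk_sz * to_mib

    # "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000" lines up with l2p:
    offset_mib, size_mib = region_mib(0x20, 0x5000)
    print(f"offset: {offset_mib:.2f} MiB, blocks: {size_mib:.2f} MiB")
    # -> offset: 0.12 MiB, blocks: 80.00 MiB, matching "Region l2p" above

The same conversion puts type 0x3 (blk_offs:0x5020, blk_sz:0x80) at offset 80.12 MiB and 0.50 MiB of blocks, which is exactly the band_md entry in the NV cache layout dump.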
00:28:15.289 [2024-11-25 20:46:23.213489] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:15.289 [2024-11-25 20:46:23.213501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.289 [2024-11-25 20:46:23.213513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:15.289 [2024-11-25 20:46:23.213524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:15.289 [2024-11-25 20:46:23.213534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:15.289 [2024-11-25 20:46:23.213556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:15.289 [2024-11-25 20:46:23.213568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.289 [2024-11-25 20:46:23.213579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:15.289 [2024-11-25 20:46:23.213589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:28:15.289 [2024-11-25 20:46:23.213599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.289 [2024-11-25 20:46:23.263548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.289 [2024-11-25 20:46:23.263631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:15.289 [2024-11-25 20:46:23.263649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.964 ms 00:28:15.289 [2024-11-25 20:46:23.263666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.289 [2024-11-25 20:46:23.263782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.289 [2024-11-25 20:46:23.263795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:15.289 [2024-11-25 20:46:23.263807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:15.289 [2024-11-25 20:46:23.263818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.289 [2024-11-25 20:46:23.329677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.290 [2024-11-25 20:46:23.329758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:15.290 [2024-11-25 20:46:23.329777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.846 ms 00:28:15.290 [2024-11-25 20:46:23.329788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.290 [2024-11-25 20:46:23.329862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.290 [2024-11-25 20:46:23.329880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:15.290 [2024-11-25 20:46:23.329893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:15.290 [2024-11-25 20:46:23.329904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.290 [2024-11-25 20:46:23.330761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.290 [2024-11-25 20:46:23.330782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:15.290 [2024-11-25 
20:46:23.330796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:28:15.290 [2024-11-25 20:46:23.330808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.290 [2024-11-25 20:46:23.330952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.290 [2024-11-25 20:46:23.330966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:15.290 [2024-11-25 20:46:23.330984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:28:15.290 [2024-11-25 20:46:23.330996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.290 [2024-11-25 20:46:23.354606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.290 [2024-11-25 20:46:23.354653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:15.290 [2024-11-25 20:46:23.354674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.623 ms 00:28:15.290 [2024-11-25 20:46:23.354686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.290 [2024-11-25 20:46:23.374815] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:15.290 [2024-11-25 20:46:23.374853] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:15.290 [2024-11-25 20:46:23.374887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.290 [2024-11-25 20:46:23.374898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:15.290 [2024-11-25 20:46:23.374912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.072 ms 00:28:15.290 [2024-11-25 20:46:23.374922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.290 [2024-11-25 20:46:23.405025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.290 [2024-11-25 20:46:23.405065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:15.290 [2024-11-25 20:46:23.405080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.093 ms 00:28:15.290 [2024-11-25 20:46:23.405092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.423549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.423586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:15.549 [2024-11-25 20:46:23.423601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.426 ms 00:28:15.549 [2024-11-25 20:46:23.423612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.442085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.442119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:15.549 [2024-11-25 20:46:23.442133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.461 ms 00:28:15.549 [2024-11-25 20:46:23.442144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.442934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.442954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:15.549 [2024-11-25 20:46:23.442973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.678 ms 00:28:15.549 [2024-11-25 20:46:23.442984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.541615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.541718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:15.549 [2024-11-25 20:46:23.541746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.764 ms 00:28:15.549 [2024-11-25 20:46:23.541759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.554507] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:15.549 [2024-11-25 20:46:23.559715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.559746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:15.549 [2024-11-25 20:46:23.559780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.897 ms 00:28:15.549 [2024-11-25 20:46:23.559792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.559930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.559945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:15.549 [2024-11-25 20:46:23.559959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:15.549 [2024-11-25 20:46:23.559975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.560063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.560076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:15.549 [2024-11-25 20:46:23.560087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:15.549 [2024-11-25 20:46:23.560099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.560126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.560138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:15.549 [2024-11-25 20:46:23.560149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:15.549 [2024-11-25 20:46:23.560160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.560207] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:15.549 [2024-11-25 20:46:23.560220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.560231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:15.549 [2024-11-25 20:46:23.560242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:15.549 [2024-11-25 20:46:23.560253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.549 [2024-11-25 20:46:23.598284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.598324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:15.549 [2024-11-25 20:46:23.598363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.070 ms 00:28:15.549 [2024-11-25 20:46:23.598381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:15.549 [2024-11-25 20:46:23.598476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.549 [2024-11-25 20:46:23.598491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:15.549 [2024-11-25 20:46:23.598503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:15.549 [2024-11-25 20:46:23.598514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.550 [2024-11-25 20:46:23.600063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 431.467 ms, result 0 00:28:16.484  [2024-11-25T20:46:25.994Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-25T20:46:26.928Z] Copying: 49/1024 [MB] (24 MBps) [2024-11-25T20:46:27.863Z] Copying: 73/1024 [MB] (24 MBps) [2024-11-25T20:46:28.798Z] Copying: 98/1024 [MB] (24 MBps) [2024-11-25T20:46:29.732Z] Copying: 123/1024 [MB] (24 MBps) [2024-11-25T20:46:30.714Z] Copying: 147/1024 [MB] (24 MBps) [2024-11-25T20:46:31.656Z] Copying: 172/1024 [MB] (25 MBps) [2024-11-25T20:46:33.029Z] Copying: 197/1024 [MB] (24 MBps) [2024-11-25T20:46:33.966Z] Copying: 221/1024 [MB] (24 MBps) [2024-11-25T20:46:34.902Z] Copying: 246/1024 [MB] (24 MBps) [2024-11-25T20:46:35.840Z] Copying: 271/1024 [MB] (24 MBps) [2024-11-25T20:46:36.778Z] Copying: 296/1024 [MB] (25 MBps) [2024-11-25T20:46:37.716Z] Copying: 321/1024 [MB] (25 MBps) [2024-11-25T20:46:38.656Z] Copying: 346/1024 [MB] (25 MBps) [2024-11-25T20:46:39.594Z] Copying: 371/1024 [MB] (25 MBps) [2024-11-25T20:46:40.976Z] Copying: 397/1024 [MB] (25 MBps) [2024-11-25T20:46:41.915Z] Copying: 422/1024 [MB] (25 MBps) [2024-11-25T20:46:42.853Z] Copying: 447/1024 [MB] (25 MBps) [2024-11-25T20:46:43.791Z] Copying: 473/1024 [MB] (25 MBps) [2024-11-25T20:46:44.793Z] Copying: 498/1024 [MB] (25 MBps) [2024-11-25T20:46:45.730Z] Copying: 523/1024 [MB] (25 MBps) [2024-11-25T20:46:46.665Z] Copying: 549/1024 [MB] (25 MBps) [2024-11-25T20:46:47.602Z] Copying: 574/1024 [MB] (25 MBps) [2024-11-25T20:46:48.980Z] Copying: 599/1024 [MB] (25 MBps) [2024-11-25T20:46:49.916Z] Copying: 624/1024 [MB] (25 MBps) [2024-11-25T20:46:50.855Z] Copying: 649/1024 [MB] (24 MBps) [2024-11-25T20:46:51.792Z] Copying: 674/1024 [MB] (25 MBps) [2024-11-25T20:46:52.729Z] Copying: 701/1024 [MB] (26 MBps) [2024-11-25T20:46:53.667Z] Copying: 727/1024 [MB] (26 MBps) [2024-11-25T20:46:54.604Z] Copying: 752/1024 [MB] (25 MBps) [2024-11-25T20:46:55.982Z] Copying: 777/1024 [MB] (25 MBps) [2024-11-25T20:46:56.919Z] Copying: 802/1024 [MB] (24 MBps) [2024-11-25T20:46:57.855Z] Copying: 827/1024 [MB] (25 MBps) [2024-11-25T20:46:58.825Z] Copying: 852/1024 [MB] (25 MBps) [2024-11-25T20:46:59.772Z] Copying: 878/1024 [MB] (25 MBps) [2024-11-25T20:47:00.710Z] Copying: 902/1024 [MB] (24 MBps) [2024-11-25T20:47:01.648Z] Copying: 927/1024 [MB] (24 MBps) [2024-11-25T20:47:02.585Z] Copying: 952/1024 [MB] (24 MBps) [2024-11-25T20:47:03.964Z] Copying: 977/1024 [MB] (24 MBps) [2024-11-25T20:47:04.901Z] Copying: 1002/1024 [MB] (25 MBps) [2024-11-25T20:47:05.161Z] Copying: 1023/1024 [MB] (20 MBps) [2024-11-25T20:47:05.161Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-25 20:47:05.065794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.025 [2024-11-25 20:47:05.065862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:57.025 [2024-11-25 20:47:05.065897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:57.025 [2024-11-25 
20:47:05.065909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.025 [2024-11-25 20:47:05.067625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:57.025 [2024-11-25 20:47:05.073376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.025 [2024-11-25 20:47:05.073428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:57.025 [2024-11-25 20:47:05.073445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.716 ms 00:28:57.025 [2024-11-25 20:47:05.073456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.025 [2024-11-25 20:47:05.084040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.025 [2024-11-25 20:47:05.084074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:57.025 [2024-11-25 20:47:05.084104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.728 ms 00:28:57.025 [2024-11-25 20:47:05.084134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.025 [2024-11-25 20:47:05.106772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.025 [2024-11-25 20:47:05.106808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:57.025 [2024-11-25 20:47:05.106824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.655 ms 00:28:57.025 [2024-11-25 20:47:05.106836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.025 [2024-11-25 20:47:05.111845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.025 [2024-11-25 20:47:05.111872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:57.025 [2024-11-25 20:47:05.111885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.983 ms 00:28:57.025 [2024-11-25 20:47:05.111895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.025 [2024-11-25 20:47:05.148067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.025 [2024-11-25 20:47:05.148122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:57.025 [2024-11-25 20:47:05.148135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.182 ms 00:28:57.025 [2024-11-25 20:47:05.148145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.286 [2024-11-25 20:47:05.169047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.286 [2024-11-25 20:47:05.169081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:57.286 [2024-11-25 20:47:05.169095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.898 ms 00:28:57.286 [2024-11-25 20:47:05.169105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.286 [2024-11-25 20:47:05.274947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.286 [2024-11-25 20:47:05.274999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:57.286 [2024-11-25 20:47:05.275014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.972 ms 00:28:57.286 [2024-11-25 20:47:05.275025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.286 [2024-11-25 20:47:05.310638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.286 [2024-11-25 20:47:05.310671] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:57.286 [2024-11-25 20:47:05.310701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.651 ms 00:28:57.286 [2024-11-25 20:47:05.310711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.286 [2024-11-25 20:47:05.346394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.286 [2024-11-25 20:47:05.346427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:57.286 [2024-11-25 20:47:05.346457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.703 ms 00:28:57.286 [2024-11-25 20:47:05.346467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.286 [2024-11-25 20:47:05.381261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.286 [2024-11-25 20:47:05.381293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:57.286 [2024-11-25 20:47:05.381306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.812 ms 00:28:57.286 [2024-11-25 20:47:05.381332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.286 [2024-11-25 20:47:05.416175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.286 [2024-11-25 20:47:05.416209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:57.286 [2024-11-25 20:47:05.416223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.815 ms 00:28:57.286 [2024-11-25 20:47:05.416233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.286 [2024-11-25 20:47:05.416270] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:57.286 [2024-11-25 20:47:05.416288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97024 / 261120 wr_cnt: 1 state: open 00:28:57.286 [2024-11-25 20:47:05.416302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416445] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 
20:47:05.416719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:57.286 [2024-11-25 20:47:05.416982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:28:57.286 [2024-11-25 20:47:05.416992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:57.287 [2024-11-25 20:47:05.417431] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:57.287 [2024-11-25 20:47:05.417441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 784b2adc-fa29-40f2-b6cc-6376a96317b8 00:28:57.287 [2024-11-25 20:47:05.417452] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97024 00:28:57.287 [2024-11-25 20:47:05.417463] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97984 00:28:57.287 [2024-11-25 20:47:05.417474] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 97024 00:28:57.287 [2024-11-25 20:47:05.417485] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:28:57.287 [2024-11-25 20:47:05.417513] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:57.287 [2024-11-25 20:47:05.417524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:57.287 [2024-11-25 20:47:05.417534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:57.287 [2024-11-25 20:47:05.417544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:57.287 [2024-11-25 20:47:05.417553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:57.287 [2024-11-25 20:47:05.417563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.287 [2024-11-25 20:47:05.417575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:57.287 [2024-11-25 20:47:05.417586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:28:57.287 [2024-11-25 20:47:05.417596] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.546 [2024-11-25 20:47:05.438117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.546 [2024-11-25 20:47:05.438149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:57.547 [2024-11-25 20:47:05.438169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.520 ms 00:28:57.547 [2024-11-25 20:47:05.438180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.547 [2024-11-25 20:47:05.438758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.547 [2024-11-25 20:47:05.438772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:57.547 [2024-11-25 20:47:05.438786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:28:57.547 [2024-11-25 20:47:05.438796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.547 [2024-11-25 20:47:05.492527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.547 [2024-11-25 20:47:05.492587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:57.547 [2024-11-25 20:47:05.492603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.547 [2024-11-25 20:47:05.492615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.547 [2024-11-25 20:47:05.492700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.547 [2024-11-25 20:47:05.492712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:57.547 [2024-11-25 20:47:05.492723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.547 [2024-11-25 20:47:05.492733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.547 [2024-11-25 20:47:05.492838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.547 [2024-11-25 20:47:05.492852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:57.547 [2024-11-25 20:47:05.492867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.547 [2024-11-25 20:47:05.492877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.547 [2024-11-25 20:47:05.492896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.547 [2024-11-25 20:47:05.492908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:57.547 [2024-11-25 20:47:05.492918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.547 [2024-11-25 20:47:05.492929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.547 [2024-11-25 20:47:05.629101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.547 [2024-11-25 20:47:05.629185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:57.547 [2024-11-25 20:47:05.629204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.547 [2024-11-25 20:47:05.629215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.806 [2024-11-25 20:47:05.734882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.806 [2024-11-25 20:47:05.734961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:57.806 [2024-11-25 20:47:05.734980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:57.806 [2024-11-25 20:47:05.734992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.806 [2024-11-25 20:47:05.735128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.806 [2024-11-25 20:47:05.735141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:57.806 [2024-11-25 20:47:05.735152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.806 [2024-11-25 20:47:05.735167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.806 [2024-11-25 20:47:05.735216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.806 [2024-11-25 20:47:05.735228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:57.806 [2024-11-25 20:47:05.735240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.806 [2024-11-25 20:47:05.735250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.806 [2024-11-25 20:47:05.735413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.806 [2024-11-25 20:47:05.735431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:57.806 [2024-11-25 20:47:05.735444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.806 [2024-11-25 20:47:05.735455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.806 [2024-11-25 20:47:05.735517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.806 [2024-11-25 20:47:05.735531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:57.806 [2024-11-25 20:47:05.735542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.806 [2024-11-25 20:47:05.735553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.806 [2024-11-25 20:47:05.735603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.806 [2024-11-25 20:47:05.735614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:57.806 [2024-11-25 20:47:05.735626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.806 [2024-11-25 20:47:05.735636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.806 [2024-11-25 20:47:05.735694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.806 [2024-11-25 20:47:05.735707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:57.806 [2024-11-25 20:47:05.735718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.806 [2024-11-25 20:47:05.735729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.806 [2024-11-25 20:47:05.735906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 671.203 ms, result 0 00:28:59.713 00:28:59.713 00:28:59.713 20:47:07 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:59.713 [2024-11-25 20:47:07.454732] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
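Two things are worth noting before the next stage of the test. First, the shutdown dump above reports 97984 total writes against 97024 user writes, i.e. a write amplification of 97984 / 97024 ≈ 1.0099, matching the WAF figure ftl_debug.c logged. Second, the spdk_dd invocation reads the written range back out of ftl0 into a regular file so it can be checked against the source data. As a rough, hypothetical sketch only (this is not what restore.sh actually runs, and the names and sizes below are illustrative), a byte-range comparison of the dumped file against a reference might look like:

    # Hypothetical verification helper -- illustrative only, not part of
    # restore.sh: hash `length` bytes starting at `offset` in a file so two
    # files can be compared over the same range.
    import hashlib

    def range_digest(path: str, offset: int, length: int) -> str:
        """Return the MD5 hex digest of `length` bytes at `offset` in `path`."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            f.seek(offset)
            remaining = length
            while remaining > 0:
                chunk = f.read(min(1 << 20, remaining))
                if not chunk:  # short file; stop rather than loop forever
                    break
                h.update(chunk)
                remaining -= len(chunk)
        return h.hexdigest()

    # Illustrative usage (made-up file names):
    # assert range_digest("testfile", 0, 1 << 20) == range_digest("reference", 0, 1 << 20)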
00:28:59.713 [2024-11-25 20:47:07.454858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80988 ] 00:28:59.713 [2024-11-25 20:47:07.638436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.713 [2024-11-25 20:47:07.770752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.283 [2024-11-25 20:47:08.173496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:00.283 [2024-11-25 20:47:08.173576] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:00.283 [2024-11-25 20:47:08.339321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.339393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:00.283 [2024-11-25 20:47:08.339412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:00.283 [2024-11-25 20:47:08.339423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.339481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.339499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:00.283 [2024-11-25 20:47:08.339511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:00.283 [2024-11-25 20:47:08.339522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.339545] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:00.283 [2024-11-25 20:47:08.340565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:00.283 [2024-11-25 20:47:08.340592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.340603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:00.283 [2024-11-25 20:47:08.340615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:29:00.283 [2024-11-25 20:47:08.340625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.343122] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:00.283 [2024-11-25 20:47:08.361939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.361974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:00.283 [2024-11-25 20:47:08.362005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.848 ms 00:29:00.283 [2024-11-25 20:47:08.362016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.362083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.362097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:00.283 [2024-11-25 20:47:08.362108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:00.283 [2024-11-25 20:47:08.362118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.374572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:00.283 [2024-11-25 20:47:08.374600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:00.283 [2024-11-25 20:47:08.374613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.399 ms 00:29:00.283 [2024-11-25 20:47:08.374628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.374714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.374729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:00.283 [2024-11-25 20:47:08.374739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:00.283 [2024-11-25 20:47:08.374749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.374804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.374817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:00.283 [2024-11-25 20:47:08.374827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:00.283 [2024-11-25 20:47:08.374836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.374864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:00.283 [2024-11-25 20:47:08.380647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.380678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:00.283 [2024-11-25 20:47:08.380694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.798 ms 00:29:00.283 [2024-11-25 20:47:08.380704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.380736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.380747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:00.283 [2024-11-25 20:47:08.380758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:00.283 [2024-11-25 20:47:08.380768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.380804] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:00.283 [2024-11-25 20:47:08.380829] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:00.283 [2024-11-25 20:47:08.380864] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:00.283 [2024-11-25 20:47:08.380885] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:00.283 [2024-11-25 20:47:08.380973] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:00.283 [2024-11-25 20:47:08.380986] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:00.283 [2024-11-25 20:47:08.381015] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:00.283 [2024-11-25 20:47:08.381028] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381041] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:00.283 [2024-11-25 20:47:08.381079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:00.283 [2024-11-25 20:47:08.381090] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:00.283 [2024-11-25 20:47:08.381104] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:00.283 [2024-11-25 20:47:08.381114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.381124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:00.283 [2024-11-25 20:47:08.381136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:29:00.283 [2024-11-25 20:47:08.381145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.381216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.283 [2024-11-25 20:47:08.381228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:00.283 [2024-11-25 20:47:08.381238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:00.283 [2024-11-25 20:47:08.381248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.283 [2024-11-25 20:47:08.381348] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:00.283 [2024-11-25 20:47:08.381376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:00.283 [2024-11-25 20:47:08.381388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:00.283 [2024-11-25 20:47:08.381423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:00.283 [2024-11-25 20:47:08.381453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:00.283 [2024-11-25 20:47:08.381472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:00.283 [2024-11-25 20:47:08.381481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:00.283 [2024-11-25 20:47:08.381490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:00.283 [2024-11-25 20:47:08.381511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:00.283 [2024-11-25 20:47:08.381520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:00.283 [2024-11-25 20:47:08.381529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:00.283 [2024-11-25 20:47:08.381548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381557] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:00.283 [2024-11-25 20:47:08.381577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:00.283 [2024-11-25 20:47:08.381605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:00.283 [2024-11-25 20:47:08.381633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:00.283 [2024-11-25 20:47:08.381661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:00.283 [2024-11-25 20:47:08.381681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:00.283 [2024-11-25 20:47:08.381691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:00.283 [2024-11-25 20:47:08.381701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:00.284 [2024-11-25 20:47:08.381710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:00.284 [2024-11-25 20:47:08.381720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:00.284 [2024-11-25 20:47:08.381729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:00.284 [2024-11-25 20:47:08.381740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:00.284 [2024-11-25 20:47:08.381750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:00.284 [2024-11-25 20:47:08.381760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:00.284 [2024-11-25 20:47:08.381770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:00.284 [2024-11-25 20:47:08.381780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:00.284 [2024-11-25 20:47:08.381789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:00.284 [2024-11-25 20:47:08.381798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:00.284 [2024-11-25 20:47:08.381808] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:00.284 [2024-11-25 20:47:08.381819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:00.284 [2024-11-25 20:47:08.381829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:00.284 [2024-11-25 20:47:08.381840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:00.284 [2024-11-25 20:47:08.381850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:00.284 [2024-11-25 20:47:08.381860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:00.284 [2024-11-25 20:47:08.381868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:00.284 
[2024-11-25 20:47:08.381877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:00.284 [2024-11-25 20:47:08.381886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:00.284 [2024-11-25 20:47:08.381896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:00.284 [2024-11-25 20:47:08.381907] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:00.284 [2024-11-25 20:47:08.381920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:00.284 [2024-11-25 20:47:08.381936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:00.284 [2024-11-25 20:47:08.381947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:00.284 [2024-11-25 20:47:08.381958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:00.284 [2024-11-25 20:47:08.381969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:00.284 [2024-11-25 20:47:08.381979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:00.284 [2024-11-25 20:47:08.381990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:00.284 [2024-11-25 20:47:08.382000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:00.284 [2024-11-25 20:47:08.382010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:00.284 [2024-11-25 20:47:08.382020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:00.284 [2024-11-25 20:47:08.382031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:00.284 [2024-11-25 20:47:08.382041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:00.284 [2024-11-25 20:47:08.382051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:00.284 [2024-11-25 20:47:08.382061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:00.284 [2024-11-25 20:47:08.382073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:00.284 [2024-11-25 20:47:08.382084] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:00.284 [2024-11-25 20:47:08.382095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:00.284 [2024-11-25 20:47:08.382106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:00.284 [2024-11-25 20:47:08.382116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:00.284 [2024-11-25 20:47:08.382126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:00.284 [2024-11-25 20:47:08.382136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:00.284 [2024-11-25 20:47:08.382146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.284 [2024-11-25 20:47:08.382157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:00.284 [2024-11-25 20:47:08.382167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:29:00.284 [2024-11-25 20:47:08.382177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.430648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.430686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:00.544 [2024-11-25 20:47:08.430700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.501 ms 00:29:00.544 [2024-11-25 20:47:08.430717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.430801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.430814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:00.544 [2024-11-25 20:47:08.430836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:00.544 [2024-11-25 20:47:08.430846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.493236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.493272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:00.544 [2024-11-25 20:47:08.493286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.429 ms 00:29:00.544 [2024-11-25 20:47:08.493296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.493344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.493356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:00.544 [2024-11-25 20:47:08.493373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:00.544 [2024-11-25 20:47:08.493383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.494244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.494260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:00.544 [2024-11-25 20:47:08.494272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:29:00.544 [2024-11-25 20:47:08.494282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.494427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.494442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:00.544 [2024-11-25 20:47:08.494460] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:29:00.544 [2024-11-25 20:47:08.494470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.517486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.517519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:00.544 [2024-11-25 20:47:08.517538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:29:00.544 [2024-11-25 20:47:08.517549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.537231] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:00.544 [2024-11-25 20:47:08.537266] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:00.544 [2024-11-25 20:47:08.537281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.537293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:00.544 [2024-11-25 20:47:08.537304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.650 ms 00:29:00.544 [2024-11-25 20:47:08.537314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.565729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.565764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:00.544 [2024-11-25 20:47:08.565794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.401 ms 00:29:00.544 [2024-11-25 20:47:08.565805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.583178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.583211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:00.544 [2024-11-25 20:47:08.583224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.354 ms 00:29:00.544 [2024-11-25 20:47:08.583234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.600211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.600243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:00.544 [2024-11-25 20:47:08.600255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.967 ms 00:29:00.544 [2024-11-25 20:47:08.600265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.544 [2024-11-25 20:47:08.601023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.544 [2024-11-25 20:47:08.601051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:00.544 [2024-11-25 20:47:08.601068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:29:00.544 [2024-11-25 20:47:08.601080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.804 [2024-11-25 20:47:08.696760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.804 [2024-11-25 20:47:08.696824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:00.804 [2024-11-25 20:47:08.696849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.809 ms 00:29:00.804 [2024-11-25 20:47:08.696860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.804 [2024-11-25 20:47:08.707721] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:00.804 [2024-11-25 20:47:08.712539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.804 [2024-11-25 20:47:08.712576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:00.804 [2024-11-25 20:47:08.712593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.643 ms 00:29:00.804 [2024-11-25 20:47:08.712615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.804 [2024-11-25 20:47:08.712729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.804 [2024-11-25 20:47:08.712742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:00.804 [2024-11-25 20:47:08.712754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:00.804 [2024-11-25 20:47:08.712769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.804 [2024-11-25 20:47:08.714956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.804 [2024-11-25 20:47:08.714991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:00.804 [2024-11-25 20:47:08.715004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:29:00.804 [2024-11-25 20:47:08.715016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.804 [2024-11-25 20:47:08.715070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.804 [2024-11-25 20:47:08.715082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:00.804 [2024-11-25 20:47:08.715092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:00.804 [2024-11-25 20:47:08.715103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.804 [2024-11-25 20:47:08.715149] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:00.804 [2024-11-25 20:47:08.715162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.804 [2024-11-25 20:47:08.715173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:00.804 [2024-11-25 20:47:08.715183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:00.804 [2024-11-25 20:47:08.715193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.804 [2024-11-25 20:47:08.752459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.804 [2024-11-25 20:47:08.752498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:00.804 [2024-11-25 20:47:08.752512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.304 ms 00:29:00.804 [2024-11-25 20:47:08.752531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.804 [2024-11-25 20:47:08.752618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.804 [2024-11-25 20:47:08.752631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:00.804 [2024-11-25 20:47:08.752642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:00.804 [2024-11-25 20:47:08.752652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
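The layout dump above is internally consistent, and its two headline sizes can be re-derived from the figures ftl_layout_setup printed: 20971520 L2P entries at the reported 4-byte address size is exactly 80 MiB (the "Region l2p" size), and the base-device data region of 0x1900000 blocks at the 4096-byte FTL block size is exactly 102400 MiB (the "Region data_btm" size). A minimal shell sanity check, a sketch using only values copied from the dump above:

  l2p_entries=20971520          # "L2P entries" from ftl_layout_setup
  l2p_addr_size=4               # "L2P address size" in bytes
  echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # prints 80 (MiB), matches "Region l2p ... blocks: 80.00 MiB"

  data_blocks=$(( 0x1900000 ))  # "Region type:0x9 ... blk_sz:0x1900000" from the SB metadata dump
  block_size=4096               # FTL block size in bytes
  echo $(( data_blocks * block_size / 1024 / 1024 ))      # prints 102400 (MiB), matches "Region data_btm ... blocks: 102400.00 MiB"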
00:29:00.804 [2024-11-25 20:47:08.754702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 415.087 ms, result 0 00:29:02.182  [2024-11-25T20:47:11.255Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-25T20:47:12.194Z] Copying: 45/1024 [MB] (25 MBps) [2024-11-25T20:47:13.131Z] Copying: 71/1024 [MB] (25 MBps) [2024-11-25T20:47:14.067Z] Copying: 96/1024 [MB] (25 MBps) [2024-11-25T20:47:15.005Z] Copying: 121/1024 [MB] (25 MBps) [2024-11-25T20:47:15.987Z] Copying: 147/1024 [MB] (25 MBps) [2024-11-25T20:47:17.363Z] Copying: 173/1024 [MB] (26 MBps) [2024-11-25T20:47:18.300Z] Copying: 199/1024 [MB] (25 MBps) [2024-11-25T20:47:19.238Z] Copying: 225/1024 [MB] (25 MBps) [2024-11-25T20:47:20.175Z] Copying: 251/1024 [MB] (26 MBps) [2024-11-25T20:47:21.112Z] Copying: 277/1024 [MB] (26 MBps) [2024-11-25T20:47:22.051Z] Copying: 304/1024 [MB] (26 MBps) [2024-11-25T20:47:22.991Z] Copying: 330/1024 [MB] (26 MBps) [2024-11-25T20:47:24.369Z] Copying: 356/1024 [MB] (26 MBps) [2024-11-25T20:47:25.307Z] Copying: 382/1024 [MB] (26 MBps) [2024-11-25T20:47:26.245Z] Copying: 408/1024 [MB] (26 MBps) [2024-11-25T20:47:27.183Z] Copying: 434/1024 [MB] (26 MBps) [2024-11-25T20:47:28.121Z] Copying: 461/1024 [MB] (26 MBps) [2024-11-25T20:47:29.059Z] Copying: 487/1024 [MB] (26 MBps) [2024-11-25T20:47:29.996Z] Copying: 514/1024 [MB] (26 MBps) [2024-11-25T20:47:31.374Z] Copying: 540/1024 [MB] (26 MBps) [2024-11-25T20:47:32.312Z] Copying: 566/1024 [MB] (25 MBps) [2024-11-25T20:47:33.248Z] Copying: 592/1024 [MB] (26 MBps) [2024-11-25T20:47:34.185Z] Copying: 619/1024 [MB] (26 MBps) [2024-11-25T20:47:35.122Z] Copying: 645/1024 [MB] (26 MBps) [2024-11-25T20:47:36.061Z] Copying: 671/1024 [MB] (26 MBps) [2024-11-25T20:47:36.998Z] Copying: 697/1024 [MB] (26 MBps) [2024-11-25T20:47:37.937Z] Copying: 724/1024 [MB] (26 MBps) [2024-11-25T20:47:39.327Z] Copying: 750/1024 [MB] (26 MBps) [2024-11-25T20:47:40.265Z] Copying: 777/1024 [MB] (26 MBps) [2024-11-25T20:47:41.203Z] Copying: 804/1024 [MB] (26 MBps) [2024-11-25T20:47:42.137Z] Copying: 831/1024 [MB] (27 MBps) [2024-11-25T20:47:43.136Z] Copying: 858/1024 [MB] (26 MBps) [2024-11-25T20:47:44.092Z] Copying: 884/1024 [MB] (26 MBps) [2024-11-25T20:47:45.027Z] Copying: 911/1024 [MB] (26 MBps) [2024-11-25T20:47:45.965Z] Copying: 938/1024 [MB] (26 MBps) [2024-11-25T20:47:47.343Z] Copying: 964/1024 [MB] (26 MBps) [2024-11-25T20:47:48.282Z] Copying: 991/1024 [MB] (26 MBps) [2024-11-25T20:47:48.282Z] Copying: 1017/1024 [MB] (26 MBps) [2024-11-25T20:47:48.850Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-25 20:47:48.608772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.714 [2024-11-25 20:47:48.608876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:40.714 [2024-11-25 20:47:48.608905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:40.714 [2024-11-25 20:47:48.608930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.714 [2024-11-25 20:47:48.608969] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:40.714 [2024-11-25 20:47:48.613909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.714 [2024-11-25 20:47:48.613968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:40.714 [2024-11-25 20:47:48.613990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:29:40.714 
[2024-11-25 20:47:48.614007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.714 [2024-11-25 20:47:48.614310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.714 [2024-11-25 20:47:48.614346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:40.714 [2024-11-25 20:47:48.614364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:29:40.714 [2024-11-25 20:47:48.614386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.714 [2024-11-25 20:47:48.619935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.714 [2024-11-25 20:47:48.619984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:40.714 [2024-11-25 20:47:48.619999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.529 ms 00:29:40.714 [2024-11-25 20:47:48.620011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.714 [2024-11-25 20:47:48.626171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.714 [2024-11-25 20:47:48.626218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:40.714 [2024-11-25 20:47:48.626232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.094 ms 00:29:40.714 [2024-11-25 20:47:48.626248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.714 [2024-11-25 20:47:48.662509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.714 [2024-11-25 20:47:48.662547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:40.714 [2024-11-25 20:47:48.662578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.927 ms 00:29:40.714 [2024-11-25 20:47:48.662589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.714 [2024-11-25 20:47:48.681768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.714 [2024-11-25 20:47:48.681811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:40.714 [2024-11-25 20:47:48.681842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.170 ms 00:29:40.714 [2024-11-25 20:47:48.681853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.714 [2024-11-25 20:47:48.831038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.714 [2024-11-25 20:47:48.831081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:40.714 [2024-11-25 20:47:48.831096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 149.382 ms 00:29:40.714 [2024-11-25 20:47:48.831108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.974 [2024-11-25 20:47:48.869163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.974 [2024-11-25 20:47:48.869202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:40.974 [2024-11-25 20:47:48.869217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.098 ms 00:29:40.974 [2024-11-25 20:47:48.869228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.974 [2024-11-25 20:47:48.904816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.975 [2024-11-25 20:47:48.904852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:40.975 [2024-11-25 20:47:48.904882] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.607 ms 00:29:40.975 [2024-11-25 20:47:48.904893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.975 [2024-11-25 20:47:48.939538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.975 [2024-11-25 20:47:48.939571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:40.975 [2024-11-25 20:47:48.939600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.663 ms 00:29:40.975 [2024-11-25 20:47:48.939611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.975 [2024-11-25 20:47:48.974114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.975 [2024-11-25 20:47:48.974151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:40.975 [2024-11-25 20:47:48.974165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.477 ms 00:29:40.975 [2024-11-25 20:47:48.974175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.975 [2024-11-25 20:47:48.974213] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:40.975 [2024-11-25 20:47:48.974231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:40.975 [2024-11-25 20:47:48.974245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974431] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974702] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 
20:47:48.974970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.974992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:40.975 [2024-11-25 20:47:48.975101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:29:40.976 [2024-11-25 20:47:48.975241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:40.976 [2024-11-25 20:47:48.975355] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:40.976 [2024-11-25 20:47:48.975366] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 784b2adc-fa29-40f2-b6cc-6376a96317b8 00:29:40.976 [2024-11-25 20:47:48.975378] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:40.976 [2024-11-25 20:47:48.975388] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35008 00:29:40.976 [2024-11-25 20:47:48.975399] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34048 00:29:40.976 [2024-11-25 20:47:48.975410] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0282 00:29:40.976 [2024-11-25 20:47:48.975420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:40.976 [2024-11-25 20:47:48.975449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:40.976 [2024-11-25 20:47:48.975460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:40.976 [2024-11-25 20:47:48.975470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:40.976 [2024-11-25 20:47:48.975479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:40.976 [2024-11-25 20:47:48.975489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.976 [2024-11-25 20:47:48.975500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:40.976 [2024-11-25 20:47:48.975510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:29:40.976 [2024-11-25 20:47:48.975521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.976 [2024-11-25 20:47:48.996387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.976 [2024-11-25 20:47:48.996419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:40.976 [2024-11-25 20:47:48.996433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.863 ms 00:29:40.976 [2024-11-25 20:47:48.996450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.976 [2024-11-25 20:47:48.997070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:40.976 [2024-11-25 20:47:48.997091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:40.976 [2024-11-25 20:47:48.997104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:29:40.976 [2024-11-25 20:47:48.997115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.976 [2024-11-25 20:47:49.051376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.976 [2024-11-25 20:47:49.051419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:40.976 [2024-11-25 20:47:49.051433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.976 [2024-11-25 20:47:49.051460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.976 [2024-11-25 20:47:49.051524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.976 [2024-11-25 20:47:49.051536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:40.976 [2024-11-25 20:47:49.051547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.976 [2024-11-25 20:47:49.051558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.976 [2024-11-25 20:47:49.051644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.976 [2024-11-25 20:47:49.051658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:40.976 [2024-11-25 20:47:49.051674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.976 [2024-11-25 20:47:49.051684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.976 [2024-11-25 20:47:49.051703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.976 [2024-11-25 20:47:49.051730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:40.976 [2024-11-25 20:47:49.051741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.976 [2024-11-25 20:47:49.051751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.242 [2024-11-25 20:47:49.183179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.242 [2024-11-25 20:47:49.183262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:41.242 [2024-11-25 20:47:49.183279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.242 [2024-11-25 20:47:49.183306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.242 [2024-11-25 20:47:49.285666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.242 [2024-11-25 20:47:49.285760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:41.242 [2024-11-25 20:47:49.285777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.242 [2024-11-25 20:47:49.285789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.242 [2024-11-25 20:47:49.285902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.242 [2024-11-25 20:47:49.285915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:41.242 [2024-11-25 20:47:49.285926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.242 [2024-11-25 20:47:49.285943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:41.242 [2024-11-25 20:47:49.286000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.242 [2024-11-25 20:47:49.286013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:41.242 [2024-11-25 20:47:49.286024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.242 [2024-11-25 20:47:49.286035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.242 [2024-11-25 20:47:49.286172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.242 [2024-11-25 20:47:49.286186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:41.242 [2024-11-25 20:47:49.286197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.242 [2024-11-25 20:47:49.286208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.242 [2024-11-25 20:47:49.286257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.242 [2024-11-25 20:47:49.286270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:41.242 [2024-11-25 20:47:49.286281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.243 [2024-11-25 20:47:49.286291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.243 [2024-11-25 20:47:49.286338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.243 [2024-11-25 20:47:49.286369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:41.243 [2024-11-25 20:47:49.286380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.243 [2024-11-25 20:47:49.286391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.243 [2024-11-25 20:47:49.286449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:41.243 [2024-11-25 20:47:49.286471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:41.243 [2024-11-25 20:47:49.286482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:41.243 [2024-11-25 20:47:49.286493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.243 [2024-11-25 20:47:49.286645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 678.941 ms, result 0 00:29:42.628 00:29:42.628 00:29:42.628 20:47:50 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:44.007 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:44.007 20:47:52 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:44.007 20:47:52 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:29:44.007 20:47:52 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79414 00:29:44.267 20:47:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79414 ']' 00:29:44.267 20:47:52 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79414 00:29:44.267 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79414) - No such 
process 00:29:44.267 Process with pid 79414 is not found 00:29:44.267 20:47:52 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79414 is not found' 00:29:44.267 Remove shared memory files 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:44.267 20:47:52 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:29:44.267 00:29:44.267 real 3m16.723s 00:29:44.267 user 3m3.215s 00:29:44.267 sys 0m14.922s 00:29:44.267 20:47:52 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.267 ************************************ 00:29:44.267 END TEST ftl_restore 00:29:44.267 ************************************ 00:29:44.267 20:47:52 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:29:44.267 20:47:52 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:44.267 20:47:52 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:44.267 20:47:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.267 20:47:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:44.267 ************************************ 00:29:44.267 START TEST ftl_dirty_shutdown 00:29:44.267 ************************************ 00:29:44.267 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:44.528 * Looking for test storage... 
00:29:44.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:44.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.528 --rc genhtml_branch_coverage=1 00:29:44.528 --rc genhtml_function_coverage=1 00:29:44.528 --rc genhtml_legend=1 00:29:44.528 --rc geninfo_all_blocks=1 00:29:44.528 --rc geninfo_unexecuted_blocks=1 00:29:44.528 00:29:44.528 ' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:44.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.528 --rc genhtml_branch_coverage=1 00:29:44.528 --rc genhtml_function_coverage=1 00:29:44.528 --rc genhtml_legend=1 00:29:44.528 --rc geninfo_all_blocks=1 00:29:44.528 --rc geninfo_unexecuted_blocks=1 00:29:44.528 00:29:44.528 ' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:44.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.528 --rc genhtml_branch_coverage=1 00:29:44.528 --rc genhtml_function_coverage=1 00:29:44.528 --rc genhtml_legend=1 00:29:44.528 --rc geninfo_all_blocks=1 00:29:44.528 --rc geninfo_unexecuted_blocks=1 00:29:44.528 00:29:44.528 ' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:44.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.528 --rc genhtml_branch_coverage=1 00:29:44.528 --rc genhtml_function_coverage=1 00:29:44.528 --rc genhtml_legend=1 00:29:44.528 --rc geninfo_all_blocks=1 00:29:44.528 --rc geninfo_unexecuted_blocks=1 00:29:44.528 00:29:44.528 ' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:29:44.528 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:29:44.529 20:47:52 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81513 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81513 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81513 ']' 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.529 20:47:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:44.788 [2024-11-25 20:47:52.754243] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
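(The launch-and-wait step traced above reduces to the sketch below. Paths, core mask, and RPC socket are exactly as in this run; the polling loop is an assumed stand-in for the waitforlisten helper, whose internals the trace does not show, using the real rpc_get_methods RPC purely as a liveness probe.)

    # start the target pinned to core 0; keep the pid for the restore_kill trap
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # wait until the RPC socket answers; bail out if the target died during init
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || exit 1
        sleep 0.5
    done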
00:29:44.788 [2024-11-25 20:47:52.754399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81513 ] 00:29:45.048 [2024-11-25 20:47:52.937137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.048 [2024-11-25 20:47:53.072191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.987 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.987 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:45.987 20:47:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:45.987 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:29:45.987 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:45.987 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:29:45.987 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:45.987 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:46.556 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:46.556 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:46.556 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:46.556 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:46.557 { 00:29:46.557 "name": "nvme0n1", 00:29:46.557 "aliases": [ 00:29:46.557 "20e6fb0c-9b7a-40e9-8007-47cc7d13b044" 00:29:46.557 ], 00:29:46.557 "product_name": "NVMe disk", 00:29:46.557 "block_size": 4096, 00:29:46.557 "num_blocks": 1310720, 00:29:46.557 "uuid": "20e6fb0c-9b7a-40e9-8007-47cc7d13b044", 00:29:46.557 "numa_id": -1, 00:29:46.557 "assigned_rate_limits": { 00:29:46.557 "rw_ios_per_sec": 0, 00:29:46.557 "rw_mbytes_per_sec": 0, 00:29:46.557 "r_mbytes_per_sec": 0, 00:29:46.557 "w_mbytes_per_sec": 0 00:29:46.557 }, 00:29:46.557 "claimed": true, 00:29:46.557 "claim_type": "read_many_write_one", 00:29:46.557 "zoned": false, 00:29:46.557 "supported_io_types": { 00:29:46.557 "read": true, 00:29:46.557 "write": true, 00:29:46.557 "unmap": true, 00:29:46.557 "flush": true, 00:29:46.557 "reset": true, 00:29:46.557 "nvme_admin": true, 00:29:46.557 "nvme_io": true, 00:29:46.557 "nvme_io_md": false, 00:29:46.557 "write_zeroes": true, 00:29:46.557 "zcopy": false, 00:29:46.557 "get_zone_info": false, 00:29:46.557 "zone_management": false, 00:29:46.557 "zone_append": false, 00:29:46.557 "compare": true, 00:29:46.557 "compare_and_write": false, 00:29:46.557 "abort": true, 00:29:46.557 "seek_hole": false, 00:29:46.557 "seek_data": false, 00:29:46.557 
"copy": true, 00:29:46.557 "nvme_iov_md": false 00:29:46.557 }, 00:29:46.557 "driver_specific": { 00:29:46.557 "nvme": [ 00:29:46.557 { 00:29:46.557 "pci_address": "0000:00:11.0", 00:29:46.557 "trid": { 00:29:46.557 "trtype": "PCIe", 00:29:46.557 "traddr": "0000:00:11.0" 00:29:46.557 }, 00:29:46.557 "ctrlr_data": { 00:29:46.557 "cntlid": 0, 00:29:46.557 "vendor_id": "0x1b36", 00:29:46.557 "model_number": "QEMU NVMe Ctrl", 00:29:46.557 "serial_number": "12341", 00:29:46.557 "firmware_revision": "8.0.0", 00:29:46.557 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:46.557 "oacs": { 00:29:46.557 "security": 0, 00:29:46.557 "format": 1, 00:29:46.557 "firmware": 0, 00:29:46.557 "ns_manage": 1 00:29:46.557 }, 00:29:46.557 "multi_ctrlr": false, 00:29:46.557 "ana_reporting": false 00:29:46.557 }, 00:29:46.557 "vs": { 00:29:46.557 "nvme_version": "1.4" 00:29:46.557 }, 00:29:46.557 "ns_data": { 00:29:46.557 "id": 1, 00:29:46.557 "can_share": false 00:29:46.557 } 00:29:46.557 } 00:29:46.557 ], 00:29:46.557 "mp_policy": "active_passive" 00:29:46.557 } 00:29:46.557 } 00:29:46.557 ]' 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:46.557 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:46.816 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=e39689e2-2a05-4b5f-bc52-e5c1f53e60f4 00:29:46.816 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:46.816 20:47:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e39689e2-2a05-4b5f-bc52-e5c1f53e60f4 00:29:47.075 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:47.334 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=81ff69c0-3707-4316-a4bf-e83025509f3e 00:29:47.335 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 81ff69c0-3707-4316-a4bf-e83025509f3e 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=660caf28-6022-4e27-8621-9e3002ad3938 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 660caf28-6022-4e27-8621-9e3002ad3938 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=660caf28-6022-4e27-8621-9e3002ad3938 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 660caf28-6022-4e27-8621-9e3002ad3938 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=660caf28-6022-4e27-8621-9e3002ad3938 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:47.594 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 660caf28-6022-4e27-8621-9e3002ad3938 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:47.853 { 00:29:47.853 "name": "660caf28-6022-4e27-8621-9e3002ad3938", 00:29:47.853 "aliases": [ 00:29:47.853 "lvs/nvme0n1p0" 00:29:47.853 ], 00:29:47.853 "product_name": "Logical Volume", 00:29:47.853 "block_size": 4096, 00:29:47.853 "num_blocks": 26476544, 00:29:47.853 "uuid": "660caf28-6022-4e27-8621-9e3002ad3938", 00:29:47.853 "assigned_rate_limits": { 00:29:47.853 "rw_ios_per_sec": 0, 00:29:47.853 "rw_mbytes_per_sec": 0, 00:29:47.853 "r_mbytes_per_sec": 0, 00:29:47.853 "w_mbytes_per_sec": 0 00:29:47.853 }, 00:29:47.853 "claimed": false, 00:29:47.853 "zoned": false, 00:29:47.853 "supported_io_types": { 00:29:47.853 "read": true, 00:29:47.853 "write": true, 00:29:47.853 "unmap": true, 00:29:47.853 "flush": false, 00:29:47.853 "reset": true, 00:29:47.853 "nvme_admin": false, 00:29:47.853 "nvme_io": false, 00:29:47.853 "nvme_io_md": false, 00:29:47.853 "write_zeroes": true, 00:29:47.853 "zcopy": false, 00:29:47.853 "get_zone_info": false, 00:29:47.853 "zone_management": false, 00:29:47.853 "zone_append": false, 00:29:47.853 "compare": false, 00:29:47.853 "compare_and_write": false, 00:29:47.853 "abort": false, 00:29:47.853 "seek_hole": true, 00:29:47.853 "seek_data": true, 00:29:47.853 "copy": false, 00:29:47.853 "nvme_iov_md": false 00:29:47.853 }, 00:29:47.853 "driver_specific": { 00:29:47.853 "lvol": { 00:29:47.853 "lvol_store_uuid": "81ff69c0-3707-4316-a4bf-e83025509f3e", 00:29:47.853 "base_bdev": "nvme0n1", 00:29:47.853 "thin_provision": true, 00:29:47.853 "num_allocated_clusters": 0, 00:29:47.853 "snapshot": false, 00:29:47.853 "clone": false, 00:29:47.853 "esnap_clone": false 00:29:47.853 } 00:29:47.853 } 00:29:47.853 } 00:29:47.853 ]' 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:47.853 20:47:55 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:48.112 20:47:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:48.112 20:47:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:48.112 20:47:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 660caf28-6022-4e27-8621-9e3002ad3938 00:29:48.112 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=660caf28-6022-4e27-8621-9e3002ad3938 00:29:48.113 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:48.113 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:48.113 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:48.113 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 660caf28-6022-4e27-8621-9e3002ad3938 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:48.372 { 00:29:48.372 "name": "660caf28-6022-4e27-8621-9e3002ad3938", 00:29:48.372 "aliases": [ 00:29:48.372 "lvs/nvme0n1p0" 00:29:48.372 ], 00:29:48.372 "product_name": "Logical Volume", 00:29:48.372 "block_size": 4096, 00:29:48.372 "num_blocks": 26476544, 00:29:48.372 "uuid": "660caf28-6022-4e27-8621-9e3002ad3938", 00:29:48.372 "assigned_rate_limits": { 00:29:48.372 "rw_ios_per_sec": 0, 00:29:48.372 "rw_mbytes_per_sec": 0, 00:29:48.372 "r_mbytes_per_sec": 0, 00:29:48.372 "w_mbytes_per_sec": 0 00:29:48.372 }, 00:29:48.372 "claimed": false, 00:29:48.372 "zoned": false, 00:29:48.372 "supported_io_types": { 00:29:48.372 "read": true, 00:29:48.372 "write": true, 00:29:48.372 "unmap": true, 00:29:48.372 "flush": false, 00:29:48.372 "reset": true, 00:29:48.372 "nvme_admin": false, 00:29:48.372 "nvme_io": false, 00:29:48.372 "nvme_io_md": false, 00:29:48.372 "write_zeroes": true, 00:29:48.372 "zcopy": false, 00:29:48.372 "get_zone_info": false, 00:29:48.372 "zone_management": false, 00:29:48.372 "zone_append": false, 00:29:48.372 "compare": false, 00:29:48.372 "compare_and_write": false, 00:29:48.372 "abort": false, 00:29:48.372 "seek_hole": true, 00:29:48.372 "seek_data": true, 00:29:48.372 "copy": false, 00:29:48.372 "nvme_iov_md": false 00:29:48.372 }, 00:29:48.372 "driver_specific": { 00:29:48.372 "lvol": { 00:29:48.372 "lvol_store_uuid": "81ff69c0-3707-4316-a4bf-e83025509f3e", 00:29:48.372 "base_bdev": "nvme0n1", 00:29:48.372 "thin_provision": true, 00:29:48.372 "num_allocated_clusters": 0, 00:29:48.372 "snapshot": false, 00:29:48.372 "clone": false, 00:29:48.372 "esnap_clone": false 00:29:48.372 } 00:29:48.372 } 00:29:48.372 } 00:29:48.372 ]' 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:29:48.372 20:47:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:48.632 20:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:29:48.632 20:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 660caf28-6022-4e27-8621-9e3002ad3938 00:29:48.632 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=660caf28-6022-4e27-8621-9e3002ad3938 00:29:48.632 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:48.632 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:48.632 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:48.632 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 660caf28-6022-4e27-8621-9e3002ad3938 00:29:48.891 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:48.891 { 00:29:48.891 "name": "660caf28-6022-4e27-8621-9e3002ad3938", 00:29:48.891 "aliases": [ 00:29:48.891 "lvs/nvme0n1p0" 00:29:48.891 ], 00:29:48.891 "product_name": "Logical Volume", 00:29:48.891 "block_size": 4096, 00:29:48.891 "num_blocks": 26476544, 00:29:48.891 "uuid": "660caf28-6022-4e27-8621-9e3002ad3938", 00:29:48.891 "assigned_rate_limits": { 00:29:48.891 "rw_ios_per_sec": 0, 00:29:48.891 "rw_mbytes_per_sec": 0, 00:29:48.891 "r_mbytes_per_sec": 0, 00:29:48.891 "w_mbytes_per_sec": 0 00:29:48.891 }, 00:29:48.891 "claimed": false, 00:29:48.891 "zoned": false, 00:29:48.891 "supported_io_types": { 00:29:48.891 "read": true, 00:29:48.891 "write": true, 00:29:48.891 "unmap": true, 00:29:48.891 "flush": false, 00:29:48.891 "reset": true, 00:29:48.891 "nvme_admin": false, 00:29:48.891 "nvme_io": false, 00:29:48.891 "nvme_io_md": false, 00:29:48.891 "write_zeroes": true, 00:29:48.891 "zcopy": false, 00:29:48.891 "get_zone_info": false, 00:29:48.891 "zone_management": false, 00:29:48.891 "zone_append": false, 00:29:48.891 "compare": false, 00:29:48.892 "compare_and_write": false, 00:29:48.892 "abort": false, 00:29:48.892 "seek_hole": true, 00:29:48.892 "seek_data": true, 00:29:48.892 "copy": false, 00:29:48.892 "nvme_iov_md": false 00:29:48.892 }, 00:29:48.892 "driver_specific": { 00:29:48.892 "lvol": { 00:29:48.892 "lvol_store_uuid": "81ff69c0-3707-4316-a4bf-e83025509f3e", 00:29:48.892 "base_bdev": "nvme0n1", 00:29:48.892 "thin_provision": true, 00:29:48.892 "num_allocated_clusters": 0, 00:29:48.892 "snapshot": false, 00:29:48.892 "clone": false, 00:29:48.892 "esnap_clone": false 00:29:48.892 } 00:29:48.892 } 00:29:48.892 } 00:29:48.892 ]' 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 660caf28-6022-4e27-8621-9e3002ad3938 
--l2p_dram_limit 10' 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:48.892 20:47:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 660caf28-6022-4e27-8621-9e3002ad3938 --l2p_dram_limit 10 -c nvc0n1p0 00:29:49.152 [2024-11-25 20:47:57.040916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.152 [2024-11-25 20:47:57.040977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:49.152 [2024-11-25 20:47:57.041000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:49.153 [2024-11-25 20:47:57.041028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.041111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.041124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:49.153 [2024-11-25 20:47:57.041138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:49.153 [2024-11-25 20:47:57.041148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.041182] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:49.153 [2024-11-25 20:47:57.042383] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:49.153 [2024-11-25 20:47:57.042424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.042437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:49.153 [2024-11-25 20:47:57.042452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:29:49.153 [2024-11-25 20:47:57.042463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.042556] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 95033dac-05e9-4434-8c8c-ed39204e32c8 00:29:49.153 [2024-11-25 20:47:57.044909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.044947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:49.153 [2024-11-25 20:47:57.044961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:49.153 [2024-11-25 20:47:57.044977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.058483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.058524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:49.153 [2024-11-25 20:47:57.058538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.455 ms 00:29:49.153 [2024-11-25 20:47:57.058551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.058683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.058702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:49.153 [2024-11-25 20:47:57.058714] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:29:49.153 [2024-11-25 20:47:57.058733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.058807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.058823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:49.153 [2024-11-25 20:47:57.058837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:49.153 [2024-11-25 20:47:57.058850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.058879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:49.153 [2024-11-25 20:47:57.064413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.064442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:49.153 [2024-11-25 20:47:57.064475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.550 ms 00:29:49.153 [2024-11-25 20:47:57.064486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.064528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.064539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:49.153 [2024-11-25 20:47:57.064553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:49.153 [2024-11-25 20:47:57.064564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.064603] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:49.153 [2024-11-25 20:47:57.064739] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:49.153 [2024-11-25 20:47:57.064761] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:49.153 [2024-11-25 20:47:57.064776] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:49.153 [2024-11-25 20:47:57.064792] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:49.153 [2024-11-25 20:47:57.064821] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:49.153 [2024-11-25 20:47:57.064836] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:49.153 [2024-11-25 20:47:57.064847] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:49.153 [2024-11-25 20:47:57.064864] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:49.153 [2024-11-25 20:47:57.064874] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:49.153 [2024-11-25 20:47:57.064888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.064911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:49.153 [2024-11-25 20:47:57.064926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:29:49.153 [2024-11-25 20:47:57.064936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.065015] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.153 [2024-11-25 20:47:57.065027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:49.153 [2024-11-25 20:47:57.065041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:49.153 [2024-11-25 20:47:57.065051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.153 [2024-11-25 20:47:57.065155] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:49.153 [2024-11-25 20:47:57.065177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:49.153 [2024-11-25 20:47:57.065191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:49.153 [2024-11-25 20:47:57.065202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:49.153 [2024-11-25 20:47:57.065226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:49.153 [2024-11-25 20:47:57.065249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:49.153 [2024-11-25 20:47:57.065262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:49.153 [2024-11-25 20:47:57.065283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:49.153 [2024-11-25 20:47:57.065293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:49.153 [2024-11-25 20:47:57.065306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:49.153 [2024-11-25 20:47:57.065316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:49.153 [2024-11-25 20:47:57.065338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:49.153 [2024-11-25 20:47:57.065348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:49.153 [2024-11-25 20:47:57.065374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:49.153 [2024-11-25 20:47:57.065389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:49.153 [2024-11-25 20:47:57.065412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.153 [2024-11-25 20:47:57.065435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:49.153 [2024-11-25 20:47:57.065445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.153 [2024-11-25 20:47:57.065467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:49.153 [2024-11-25 20:47:57.065479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.153 [2024-11-25 20:47:57.065501] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:49.153 [2024-11-25 20:47:57.065510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:49.153 [2024-11-25 20:47:57.065532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:49.153 [2024-11-25 20:47:57.065547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:49.153 [2024-11-25 20:47:57.065569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:49.153 [2024-11-25 20:47:57.065579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:49.153 [2024-11-25 20:47:57.065591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:49.153 [2024-11-25 20:47:57.065601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:49.153 [2024-11-25 20:47:57.065612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:49.153 [2024-11-25 20:47:57.065622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.153 [2024-11-25 20:47:57.065634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:49.154 [2024-11-25 20:47:57.065644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:49.154 [2024-11-25 20:47:57.065656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.154 [2024-11-25 20:47:57.065665] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:49.154 [2024-11-25 20:47:57.065680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:49.154 [2024-11-25 20:47:57.065692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:49.154 [2024-11-25 20:47:57.065715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:49.154 [2024-11-25 20:47:57.065727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:49.154 [2024-11-25 20:47:57.065742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:49.154 [2024-11-25 20:47:57.065752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:49.154 [2024-11-25 20:47:57.065765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:49.154 [2024-11-25 20:47:57.065774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:49.154 [2024-11-25 20:47:57.065787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:49.154 [2024-11-25 20:47:57.065803] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:49.154 [2024-11-25 20:47:57.065823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.154 [2024-11-25 20:47:57.065836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:49.154 [2024-11-25 20:47:57.065849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:49.154 [2024-11-25 20:47:57.065860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:49.154 [2024-11-25 20:47:57.065874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:49.154 [2024-11-25 20:47:57.065885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:49.154 [2024-11-25 20:47:57.065898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:49.154 [2024-11-25 20:47:57.065909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:49.154 [2024-11-25 20:47:57.065923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:49.154 [2024-11-25 20:47:57.065933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:49.154 [2024-11-25 20:47:57.065950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:49.154 [2024-11-25 20:47:57.065961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:49.154 [2024-11-25 20:47:57.065974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:49.154 [2024-11-25 20:47:57.065985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:49.154 [2024-11-25 20:47:57.066000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:49.154 [2024-11-25 20:47:57.066010] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:49.154 [2024-11-25 20:47:57.066025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:49.154 [2024-11-25 20:47:57.066037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:49.154 [2024-11-25 20:47:57.066050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:49.154 [2024-11-25 20:47:57.066060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:49.154 [2024-11-25 20:47:57.066074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:49.154 [2024-11-25 20:47:57.066085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.154 [2024-11-25 20:47:57.066101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:49.154 [2024-11-25 20:47:57.066113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:29:49.154 [2024-11-25 20:47:57.066126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.154 [2024-11-25 20:47:57.066173] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:49.154 [2024-11-25 20:47:57.066193] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:53.347 [2024-11-25 20:48:00.647826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.647912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:53.348 [2024-11-25 20:48:00.647934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3587.461 ms 00:29:53.348 [2024-11-25 20:48:00.647949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.694327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.694406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:53.348 [2024-11-25 20:48:00.694426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.008 ms 00:29:53.348 [2024-11-25 20:48:00.694440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.694581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.694599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:53.348 [2024-11-25 20:48:00.694612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:53.348 [2024-11-25 20:48:00.694635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.744537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.744600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:53.348 [2024-11-25 20:48:00.744617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.938 ms 00:29:53.348 [2024-11-25 20:48:00.744633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.744676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.744691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:53.348 [2024-11-25 20:48:00.744703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:53.348 [2024-11-25 20:48:00.744729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.745569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.745598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:53.348 [2024-11-25 20:48:00.745610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:29:53.348 [2024-11-25 20:48:00.745623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.745760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.745780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:53.348 [2024-11-25 20:48:00.745796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:29:53.348 [2024-11-25 20:48:00.745813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.771213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.771262] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:53.348 [2024-11-25 20:48:00.771277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.414 ms 00:29:53.348 [2024-11-25 20:48:00.771308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.784671] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:53.348 [2024-11-25 20:48:00.789930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.789961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:53.348 [2024-11-25 20:48:00.789978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.549 ms 00:29:53.348 [2024-11-25 20:48:00.790005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.892949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.893011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:53.348 [2024-11-25 20:48:00.893034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.067 ms 00:29:53.348 [2024-11-25 20:48:00.893047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.893264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.893282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:53.348 [2024-11-25 20:48:00.893302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:29:53.348 [2024-11-25 20:48:00.893314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.930633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.930676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:53.348 [2024-11-25 20:48:00.930695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.309 ms 00:29:53.348 [2024-11-25 20:48:00.930707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.965172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.965211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:53.348 [2024-11-25 20:48:00.965228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.471 ms 00:29:53.348 [2024-11-25 20:48:00.965254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:00.966023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:00.966048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:53.348 [2024-11-25 20:48:00.966064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:29:53.348 [2024-11-25 20:48:00.966078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:01.066390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:01.066437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:53.348 [2024-11-25 20:48:01.066463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.413 ms 00:29:53.348 [2024-11-25 20:48:01.066474] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:01.105604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:01.105648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:53.348 [2024-11-25 20:48:01.105666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.100 ms 00:29:53.348 [2024-11-25 20:48:01.105678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:01.142457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:01.142496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:53.348 [2024-11-25 20:48:01.142514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.785 ms 00:29:53.348 [2024-11-25 20:48:01.142525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:01.178945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:01.178982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:53.348 [2024-11-25 20:48:01.179001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.431 ms 00:29:53.348 [2024-11-25 20:48:01.179028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:01.179086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:01.179110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:53.348 [2024-11-25 20:48:01.179130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:53.348 [2024-11-25 20:48:01.179141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:01.179273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.348 [2024-11-25 20:48:01.179290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:53.348 [2024-11-25 20:48:01.179305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:53.348 [2024-11-25 20:48:01.179316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.348 [2024-11-25 20:48:01.180757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4146.039 ms, result 0 00:29:53.348 { 00:29:53.348 "name": "ftl0", 00:29:53.348 "uuid": "95033dac-05e9-4434-8c8c-ed39204e32c8" 00:29:53.348 } 00:29:53.348 20:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:29:53.348 20:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:53.348 20:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:29:53.348 20:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:29:53.348 20:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:29:53.608 /dev/nbd0 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:29:53.608 1+0 records in 00:29:53.608 1+0 records out 00:29:53.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789485 s, 5.2 MB/s 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:29:53.608 20:48:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:29:53.867 [2024-11-25 20:48:01.769878] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:29:53.867 [2024-11-25 20:48:01.770006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81658 ] 00:29:53.867 [2024-11-25 20:48:01.957237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.129 [2024-11-25 20:48:02.128041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.507  [2024-11-25T20:48:04.578Z] Copying: 201/1024 [MB] (201 MBps) [2024-11-25T20:48:05.514Z] Copying: 404/1024 [MB] (202 MBps) [2024-11-25T20:48:06.893Z] Copying: 605/1024 [MB] (201 MBps) [2024-11-25T20:48:07.832Z] Copying: 804/1024 [MB] (199 MBps) [2024-11-25T20:48:07.832Z] Copying: 1001/1024 [MB] (196 MBps) [2024-11-25T20:48:09.209Z] Copying: 1024/1024 [MB] (average 200 MBps) 00:30:01.073 00:30:01.073 20:48:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:02.977 20:48:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:30:02.977 [2024-11-25 20:48:10.706481] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
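(By this point the trace has assembled the full device stack and begun pushing data through it. Condensed into the underlying RPC calls, the construction sequence is the sketch below; commands, sizes, and UUIDs are exactly as reported earlier in the trace, the comments are interpretive, and the leftover lvstore from an earlier run is deleted first via bdev_lvol_delete_lvstore.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1 (1310720 x 4 KiB = 5 GiB)
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore 81ff69c0-3707-4316-a4bf-e83025509f3e
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 81ff69c0-3707-4316-a4bf-e83025509f3e  # thin 101 GiB data lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB split -> nvc0n1p0 (NV cache)
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 660caf28-6022-4e27-8621-9e3002ad3938 --l2p_dram_limit 10 -c nvc0n1p0
    $rpc nbd_start_disk ftl0 /dev/nbd0                                  # expose ftl0 as /dev/nbd0

The data path being exercised is (262144 blocks x 4096 B = 1 GiB; the md5sum is presumably the reference checksum for verification after the dirty shutdown):

    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    "$dd_bin" -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144               # seed 1 GiB of random data
    md5sum "$testfile"
    "$dd_bin" -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct   # replay onto ftl0 via NBD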
00:30:02.977 [2024-11-25 20:48:10.706622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81751 ] 00:30:02.977 [2024-11-25 20:48:10.891984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.977 [2024-11-25 20:48:11.037898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.407  [2024-11-25T20:48:13.479Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-25T20:48:14.415Z] Copying: 32/1024 [MB] (16 MBps) [2024-11-25T20:48:15.792Z] Copying: 49/1024 [MB] (16 MBps) [2024-11-25T20:48:16.728Z] Copying: 66/1024 [MB] (16 MBps) [2024-11-25T20:48:17.664Z] Copying: 82/1024 [MB] (16 MBps) [2024-11-25T20:48:18.601Z] Copying: 99/1024 [MB] (16 MBps) [2024-11-25T20:48:19.539Z] Copying: 116/1024 [MB] (16 MBps) [2024-11-25T20:48:20.475Z] Copying: 133/1024 [MB] (16 MBps) [2024-11-25T20:48:21.413Z] Copying: 149/1024 [MB] (16 MBps) [2024-11-25T20:48:22.792Z] Copying: 166/1024 [MB] (16 MBps) [2024-11-25T20:48:23.730Z] Copying: 183/1024 [MB] (16 MBps) [2024-11-25T20:48:24.667Z] Copying: 200/1024 [MB] (16 MBps) [2024-11-25T20:48:25.627Z] Copying: 216/1024 [MB] (16 MBps) [2024-11-25T20:48:26.619Z] Copying: 231/1024 [MB] (15 MBps) [2024-11-25T20:48:27.563Z] Copying: 248/1024 [MB] (16 MBps) [2024-11-25T20:48:28.500Z] Copying: 264/1024 [MB] (16 MBps) [2024-11-25T20:48:29.439Z] Copying: 281/1024 [MB] (16 MBps) [2024-11-25T20:48:30.815Z] Copying: 298/1024 [MB] (16 MBps) [2024-11-25T20:48:31.751Z] Copying: 314/1024 [MB] (16 MBps) [2024-11-25T20:48:32.687Z] Copying: 330/1024 [MB] (15 MBps) [2024-11-25T20:48:33.624Z] Copying: 345/1024 [MB] (15 MBps) [2024-11-25T20:48:34.562Z] Copying: 361/1024 [MB] (15 MBps) [2024-11-25T20:48:35.499Z] Copying: 376/1024 [MB] (15 MBps) [2024-11-25T20:48:36.437Z] Copying: 391/1024 [MB] (15 MBps) [2024-11-25T20:48:37.815Z] Copying: 407/1024 [MB] (15 MBps) [2024-11-25T20:48:38.381Z] Copying: 422/1024 [MB] (15 MBps) [2024-11-25T20:48:39.766Z] Copying: 438/1024 [MB] (15 MBps) [2024-11-25T20:48:40.733Z] Copying: 453/1024 [MB] (15 MBps) [2024-11-25T20:48:41.670Z] Copying: 469/1024 [MB] (15 MBps) [2024-11-25T20:48:42.609Z] Copying: 483/1024 [MB] (14 MBps) [2024-11-25T20:48:43.546Z] Copying: 499/1024 [MB] (15 MBps) [2024-11-25T20:48:44.483Z] Copying: 514/1024 [MB] (15 MBps) [2024-11-25T20:48:45.419Z] Copying: 529/1024 [MB] (15 MBps) [2024-11-25T20:48:46.797Z] Copying: 544/1024 [MB] (15 MBps) [2024-11-25T20:48:47.365Z] Copying: 560/1024 [MB] (15 MBps) [2024-11-25T20:48:48.741Z] Copying: 575/1024 [MB] (15 MBps) [2024-11-25T20:48:49.678Z] Copying: 590/1024 [MB] (15 MBps) [2024-11-25T20:48:50.615Z] Copying: 606/1024 [MB] (15 MBps) [2024-11-25T20:48:51.556Z] Copying: 621/1024 [MB] (15 MBps) [2024-11-25T20:48:52.498Z] Copying: 637/1024 [MB] (15 MBps) [2024-11-25T20:48:53.435Z] Copying: 652/1024 [MB] (15 MBps) [2024-11-25T20:48:54.374Z] Copying: 667/1024 [MB] (15 MBps) [2024-11-25T20:48:55.753Z] Copying: 682/1024 [MB] (15 MBps) [2024-11-25T20:48:56.695Z] Copying: 698/1024 [MB] (15 MBps) [2024-11-25T20:48:57.640Z] Copying: 713/1024 [MB] (15 MBps) [2024-11-25T20:48:58.579Z] Copying: 728/1024 [MB] (15 MBps) [2024-11-25T20:48:59.515Z] Copying: 743/1024 [MB] (15 MBps) [2024-11-25T20:49:00.452Z] Copying: 758/1024 [MB] (15 MBps) [2024-11-25T20:49:01.388Z] Copying: 773/1024 [MB] (15 MBps) [2024-11-25T20:49:02.762Z] Copying: 788/1024 [MB] (14 MBps) 
[2024-11-25T20:49:03.370Z] Copying: 803/1024 [MB] (15 MBps) [2024-11-25T20:49:04.747Z] Copying: 818/1024 [MB] (15 MBps) [2024-11-25T20:49:05.684Z] Copying: 833/1024 [MB] (14 MBps) [2024-11-25T20:49:06.620Z] Copying: 848/1024 [MB] (15 MBps) [2024-11-25T20:49:07.557Z] Copying: 864/1024 [MB] (15 MBps) [2024-11-25T20:49:08.494Z] Copying: 879/1024 [MB] (15 MBps) [2024-11-25T20:49:09.431Z] Copying: 894/1024 [MB] (15 MBps) [2024-11-25T20:49:10.367Z] Copying: 910/1024 [MB] (15 MBps) [2024-11-25T20:49:11.744Z] Copying: 925/1024 [MB] (15 MBps) [2024-11-25T20:49:12.682Z] Copying: 941/1024 [MB] (15 MBps) [2024-11-25T20:49:13.635Z] Copying: 957/1024 [MB] (15 MBps) [2024-11-25T20:49:14.573Z] Copying: 972/1024 [MB] (15 MBps) [2024-11-25T20:49:15.547Z] Copying: 988/1024 [MB] (15 MBps) [2024-11-25T20:49:16.486Z] Copying: 1004/1024 [MB] (15 MBps) [2024-11-25T20:49:16.746Z] Copying: 1020/1024 [MB] (16 MBps) [2024-11-25T20:49:18.125Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:31:09.989 00:31:09.989 20:49:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:31:09.989 20:49:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:31:09.989 20:49:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:10.249 [2024-11-25 20:49:18.155786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.155860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:10.249 [2024-11-25 20:49:18.155880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:10.249 [2024-11-25 20:49:18.155896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.155930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:10.249 [2024-11-25 20:49:18.160128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.160171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:10.249 [2024-11-25 20:49:18.160192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.176 ms 00:31:10.249 [2024-11-25 20:49:18.160206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.162492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.162730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:10.249 [2024-11-25 20:49:18.162763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:31:10.249 [2024-11-25 20:49:18.162776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.180891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.180940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:10.249 [2024-11-25 20:49:18.180961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.105 ms 00:31:10.249 [2024-11-25 20:49:18.180973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.185959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.186001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:10.249 
[2024-11-25 20:49:18.186031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.943 ms 00:31:10.249 [2024-11-25 20:49:18.186044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.221962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.222007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:10.249 [2024-11-25 20:49:18.222026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.877 ms 00:31:10.249 [2024-11-25 20:49:18.222039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.243488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.243706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:10.249 [2024-11-25 20:49:18.243742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.429 ms 00:31:10.249 [2024-11-25 20:49:18.243756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.243913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.243929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:10.249 [2024-11-25 20:49:18.243946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:10.249 [2024-11-25 20:49:18.243958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.279198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.279401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:10.249 [2024-11-25 20:49:18.279432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.271 ms 00:31:10.249 [2024-11-25 20:49:18.279445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.316664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.316711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:10.249 [2024-11-25 20:49:18.316730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.231 ms 00:31:10.249 [2024-11-25 20:49:18.316743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.249 [2024-11-25 20:49:18.352033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.249 [2024-11-25 20:49:18.352080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:10.249 [2024-11-25 20:49:18.352099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.293 ms 00:31:10.249 [2024-11-25 20:49:18.352112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.510 [2024-11-25 20:49:18.386251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.510 [2024-11-25 20:49:18.386296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:10.510 [2024-11-25 20:49:18.386315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.010 ms 00:31:10.510 [2024-11-25 20:49:18.386341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.510 [2024-11-25 20:49:18.386416] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:10.510 [2024-11-25 20:49:18.386435] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Bands 2-99 condensed: all identical to Band 1, 0 / 261120 wr_cnt: 0 state: free ...] 00:31:10.511 [2024-11-25 20:49:18.387946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:10.511 [2024-11-25 20:49:18.387967] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:10.511 [2024-11-25 20:49:18.387982] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95033dac-05e9-4434-8c8c-ed39204e32c8 00:31:10.511 [2024-11-25 20:49:18.387995] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:10.511 [2024-11-25 20:49:18.388013] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:10.511 [2024-11-25 20:49:18.388025] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:10.511 [2024-11-25 20:49:18.388052] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:10.511 [2024-11-25 20:49:18.388064] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:10.511 [2024-11-25 20:49:18.388079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:10.511 [2024-11-25 20:49:18.388092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:10.511 [2024-11-25 20:49:18.388106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:10.511 [2024-11-25 20:49:18.388117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:10.511 [2024-11-25 20:49:18.388133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.511 [2024-11-25 20:49:18.388145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:10.511 [2024-11-25 20:49:18.388161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:31:10.511 [2024-11-25 20:49:18.388173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.511 [2024-11-25 20:49:18.408033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.512 [2024-11-25 20:49:18.408078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:10.512 [2024-11-25 20:49:18.408096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.826 ms 00:31:10.512 [2024-11-25 20:49:18.408108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.512 [2024-11-25 20:49:18.408701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.512 [2024-11-25 20:49:18.408718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:10.512 [2024-11-25 20:49:18.408735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:31:10.512 [2024-11-25 20:49:18.408748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.512 [2024-11-25 20:49:18.470280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.512 [2024-11-25 20:49:18.470532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:10.512 [2024-11-25 20:49:18.470563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.512 [2024-11-25 20:49:18.470576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.512 [2024-11-25 20:49:18.470642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.512 [2024-11-25 20:49:18.470656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:10.512 [2024-11-25 20:49:18.470672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.512 [2024-11-25 20:49:18.470685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:10.512 [2024-11-25 20:49:18.470783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.512 [2024-11-25 20:49:18.470802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:10.512 [2024-11-25 20:49:18.470818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.512 [2024-11-25 20:49:18.470831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.512 [2024-11-25 20:49:18.470870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.512 [2024-11-25 20:49:18.470884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:10.512 [2024-11-25 20:49:18.470905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.512 [2024-11-25 20:49:18.470918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.512 [2024-11-25 20:49:18.588980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.512 [2024-11-25 20:49:18.589041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:10.512 [2024-11-25 20:49:18.589061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.512 [2024-11-25 20:49:18.589073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.772 [2024-11-25 20:49:18.687560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.772 [2024-11-25 20:49:18.687622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:10.772 [2024-11-25 20:49:18.687643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.772 [2024-11-25 20:49:18.687656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.772 [2024-11-25 20:49:18.687792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.772 [2024-11-25 20:49:18.687809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:10.772 [2024-11-25 20:49:18.687829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.772 [2024-11-25 20:49:18.687858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.772 [2024-11-25 20:49:18.687921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.772 [2024-11-25 20:49:18.687936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:10.772 [2024-11-25 20:49:18.687952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.772 [2024-11-25 20:49:18.687965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.772 [2024-11-25 20:49:18.688089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.772 [2024-11-25 20:49:18.688116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:10.772 [2024-11-25 20:49:18.688133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.772 [2024-11-25 20:49:18.688148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.772 [2024-11-25 20:49:18.688200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.772 [2024-11-25 20:49:18.688216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:10.772 [2024-11-25 20:49:18.688232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.772 
[2024-11-25 20:49:18.688244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.772 [2024-11-25 20:49:18.688295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.772 [2024-11-25 20:49:18.688310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:10.772 [2024-11-25 20:49:18.688350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.772 [2024-11-25 20:49:18.688367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.772 [2024-11-25 20:49:18.688428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.772 [2024-11-25 20:49:18.688444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:10.772 [2024-11-25 20:49:18.688460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.772 [2024-11-25 20:49:18.688473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.772 [2024-11-25 20:49:18.688623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.664 ms, result 0 00:31:10.772 true 00:31:10.772 20:49:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81513 00:31:10.772 20:49:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81513 00:31:10.772 20:49:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:31:10.772 [2024-11-25 20:49:18.826549] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:31:10.772 [2024-11-25 20:49:18.826695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82439 ] 00:31:11.032 [2024-11-25 20:49:19.010662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.032 [2024-11-25 20:49:19.115112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.501  [2024-11-25T20:49:21.574Z] Copying: 193/1024 [MB] (193 MBps) [2024-11-25T20:49:22.513Z] Copying: 392/1024 [MB] (199 MBps) [2024-11-25T20:49:23.451Z] Copying: 593/1024 [MB] (200 MBps) [2024-11-25T20:49:24.831Z] Copying: 788/1024 [MB] (195 MBps) [2024-11-25T20:49:24.831Z] Copying: 980/1024 [MB] (191 MBps) [2024-11-25T20:49:25.769Z] Copying: 1024/1024 [MB] (average 196 MBps) 00:31:17.633 00:31:17.633 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81513 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:31:17.633 20:49:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:17.893 [2024-11-25 20:49:25.827663] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
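For readability, the dirty-shutdown sequence that the trace above just executed (dirty_shutdown.sh@78-88) condenses to the following bash sketch. Every command and path is taken from the trace itself; only the comments are added, and the literal pid 81513 is of course specific to this run:

    # Stop the NBD export and unload the FTL bdev through the target's RPC socket (@78-80)
    sync /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

    # Kill the spdk_tgt instance outright and drop its trace shm file (@83-84)
    kill -9 81513
    rm -f /dev/shm/spdk_tgt_trace.pid81513

    # Stage 1 GiB of random data, then push it into ftl0 at a 262144-block offset
    # via a standalone spdk_dd that brings the FTL device up from ftl.json (@87-88)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
      --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json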
00:31:17.893 [2024-11-25 20:49:25.827791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82514 ] 00:31:17.893 [2024-11-25 20:49:26.011608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.152 [2024-11-25 20:49:26.114307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.410 [2024-11-25 20:49:26.473292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.410 [2024-11-25 20:49:26.473387] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:18.410 [2024-11-25 20:49:26.539971] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:18.410 [2024-11-25 20:49:26.540296] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:18.410 [2024-11-25 20:49:26.540498] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:18.978 [2024-11-25 20:49:26.863229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.863521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:18.978 [2024-11-25 20:49:26.863550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:18.978 [2024-11-25 20:49:26.863571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.863637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.863651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:18.978 [2024-11-25 20:49:26.863664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:18.978 [2024-11-25 20:49:26.863675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.863701] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:18.978 [2024-11-25 20:49:26.864642] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:18.978 [2024-11-25 20:49:26.864670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.864682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:18.978 [2024-11-25 20:49:26.864696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:31:18.978 [2024-11-25 20:49:26.864718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.866179] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:18.978 [2024-11-25 20:49:26.883836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.883882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:18.978 [2024-11-25 20:49:26.883898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.686 ms 00:31:18.978 [2024-11-25 20:49:26.883910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.883976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.883990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:31:18.978 [2024-11-25 20:49:26.884003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:18.978 [2024-11-25 20:49:26.884014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.890878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.891119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:18.978 [2024-11-25 20:49:26.891142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.796 ms 00:31:18.978 [2024-11-25 20:49:26.891155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.891246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.891261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:18.978 [2024-11-25 20:49:26.891274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:18.978 [2024-11-25 20:49:26.891285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.891359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.891390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:18.978 [2024-11-25 20:49:26.891413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:18.978 [2024-11-25 20:49:26.891426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.891454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:18.978 [2024-11-25 20:49:26.896106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.896144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:18.978 [2024-11-25 20:49:26.896158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:31:18.978 [2024-11-25 20:49:26.896170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.896202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.978 [2024-11-25 20:49:26.896214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:18.978 [2024-11-25 20:49:26.896226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:18.978 [2024-11-25 20:49:26.896237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.978 [2024-11-25 20:49:26.896299] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:18.978 [2024-11-25 20:49:26.896343] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:18.978 [2024-11-25 20:49:26.896380] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:18.978 [2024-11-25 20:49:26.896399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:18.978 [2024-11-25 20:49:26.896487] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:18.978 [2024-11-25 20:49:26.896503] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:18.978 
[2024-11-25 20:49:26.896518] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:18.978 [2024-11-25 20:49:26.896538] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:18.978 [2024-11-25 20:49:26.896552] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:18.978 [2024-11-25 20:49:26.896566] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:18.978 [2024-11-25 20:49:26.896577] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:18.979 [2024-11-25 20:49:26.896588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:18.979 [2024-11-25 20:49:26.896601] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:18.979 [2024-11-25 20:49:26.896616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.979 [2024-11-25 20:49:26.896627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:18.979 [2024-11-25 20:49:26.896639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:31:18.979 [2024-11-25 20:49:26.896650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.979 [2024-11-25 20:49:26.896723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.979 [2024-11-25 20:49:26.896741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:18.979 [2024-11-25 20:49:26.896753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:18.979 [2024-11-25 20:49:26.896764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.979 [2024-11-25 20:49:26.896859] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:18.979 [2024-11-25 20:49:26.896876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:18.979 [2024-11-25 20:49:26.896888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.979 [2024-11-25 20:49:26.896899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.979 [2024-11-25 20:49:26.896912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:18.979 [2024-11-25 20:49:26.896923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:18.979 [2024-11-25 20:49:26.896934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:18.979 [2024-11-25 20:49:26.896945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:18.979 [2024-11-25 20:49:26.896957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:18.979 [2024-11-25 20:49:26.896980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.979 [2024-11-25 20:49:26.896992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:18.979 [2024-11-25 20:49:26.897003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:18.979 [2024-11-25 20:49:26.897013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:18.979 [2024-11-25 20:49:26.897025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:18.979 [2024-11-25 20:49:26.897036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:18.979 [2024-11-25 20:49:26.897046] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:18.979 [2024-11-25 20:49:26.897069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:18.979 [2024-11-25 20:49:26.897079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:18.979 [2024-11-25 20:49:26.897100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.979 [2024-11-25 20:49:26.897121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:18.979 [2024-11-25 20:49:26.897131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.979 [2024-11-25 20:49:26.897152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:18.979 [2024-11-25 20:49:26.897162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.979 [2024-11-25 20:49:26.897182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:18.979 [2024-11-25 20:49:26.897192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:18.979 [2024-11-25 20:49:26.897213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:18.979 [2024-11-25 20:49:26.897223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.979 [2024-11-25 20:49:26.897244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:18.979 [2024-11-25 20:49:26.897255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:18.979 [2024-11-25 20:49:26.897265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:18.979 [2024-11-25 20:49:26.897274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:18.979 [2024-11-25 20:49:26.897285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:18.979 [2024-11-25 20:49:26.897295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:18.979 [2024-11-25 20:49:26.897315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:18.979 [2024-11-25 20:49:26.897342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.979 [2024-11-25 20:49:26.897354] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:18.979 [2024-11-25 20:49:26.897365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:18.979 [2024-11-25 20:49:26.897381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:18.979 [2024-11-25 20:49:26.897392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:18.979 [2024-11-25 
20:49:26.897403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:18.979 [2024-11-25 20:49:26.897414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:18.979 [2024-11-25 20:49:26.897425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:18.979 [2024-11-25 20:49:26.897437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:18.979 [2024-11-25 20:49:26.897447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:18.979 [2024-11-25 20:49:26.897458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:18.979 [2024-11-25 20:49:26.897470] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:18.979 [2024-11-25 20:49:26.897484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.979 [2024-11-25 20:49:26.897497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:18.979 [2024-11-25 20:49:26.897509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:18.979 [2024-11-25 20:49:26.897520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:18.979 [2024-11-25 20:49:26.897532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:18.979 [2024-11-25 20:49:26.897544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:18.979 [2024-11-25 20:49:26.897555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:18.979 [2024-11-25 20:49:26.897566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:18.979 [2024-11-25 20:49:26.897576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:18.979 [2024-11-25 20:49:26.897587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:18.979 [2024-11-25 20:49:26.897598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:18.979 [2024-11-25 20:49:26.897609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:18.979 [2024-11-25 20:49:26.897620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:18.979 [2024-11-25 20:49:26.897631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:18.979 [2024-11-25 20:49:26.897642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:18.979 [2024-11-25 20:49:26.897653] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:31:18.979 [2024-11-25 20:49:26.897666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:18.979 [2024-11-25 20:49:26.897678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:18.979 [2024-11-25 20:49:26.897690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:18.979 [2024-11-25 20:49:26.897703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:18.979 [2024-11-25 20:49:26.897715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:18.979 [2024-11-25 20:49:26.897727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.979 [2024-11-25 20:49:26.897738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:18.979 [2024-11-25 20:49:26.897749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:31:18.979 [2024-11-25 20:49:26.897770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.979 [2024-11-25 20:49:26.935090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.979 [2024-11-25 20:49:26.935128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:18.979 [2024-11-25 20:49:26.935143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.327 ms 00:31:18.979 [2024-11-25 20:49:26.935155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.979 [2024-11-25 20:49:26.935236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.979 [2024-11-25 20:49:26.935249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:18.979 [2024-11-25 20:49:26.935261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:18.979 [2024-11-25 20:49:26.935272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.980 [2024-11-25 20:49:27.009531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.980 [2024-11-25 20:49:27.009573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:18.980 [2024-11-25 20:49:27.009594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.304 ms 00:31:18.980 [2024-11-25 20:49:27.009607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.980 [2024-11-25 20:49:27.009649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.980 [2024-11-25 20:49:27.009662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:18.980 [2024-11-25 20:49:27.009676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:18.980 [2024-11-25 20:49:27.009688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.980 [2024-11-25 20:49:27.010226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.980 [2024-11-25 20:49:27.010244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:18.980 [2024-11-25 20:49:27.010259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:31:18.980 [2024-11-25 20:49:27.010278] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.980 [2024-11-25 20:49:27.010434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.980 [2024-11-25 20:49:27.010451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:18.980 [2024-11-25 20:49:27.010465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:31:18.980 [2024-11-25 20:49:27.010476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.980 [2024-11-25 20:49:27.030460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.980 [2024-11-25 20:49:27.030499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:18.980 [2024-11-25 20:49:27.030514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.991 ms 00:31:18.980 [2024-11-25 20:49:27.030527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.980 [2024-11-25 20:49:27.048530] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:18.980 [2024-11-25 20:49:27.048755] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:18.980 [2024-11-25 20:49:27.048777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.980 [2024-11-25 20:49:27.048790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:18.980 [2024-11-25 20:49:27.048804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.174 ms 00:31:18.980 [2024-11-25 20:49:27.048815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.980 [2024-11-25 20:49:27.078489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.980 [2024-11-25 20:49:27.078658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:18.980 [2024-11-25 20:49:27.078683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.637 ms 00:31:18.980 [2024-11-25 20:49:27.078696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.980 [2024-11-25 20:49:27.096988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.980 [2024-11-25 20:49:27.097031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:18.980 [2024-11-25 20:49:27.097047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.273 ms 00:31:18.980 [2024-11-25 20:49:27.097058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.113889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.113934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:19.239 [2024-11-25 20:49:27.113950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.815 ms 00:31:19.239 [2024-11-25 20:49:27.113961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.114748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.114789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:19.239 [2024-11-25 20:49:27.114804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:31:19.239 [2024-11-25 20:49:27.114816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
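Each FTL management step in this startup is logged as the same four-record group (Action / name / duration / status), so the per-step timings are easy to tabulate from a saved copy of the console log. A minimal sketch, assuming the log has been saved as build.log (the filename is illustrative):

    # Pair each 'name:' record with the 'duration:' record that follows it
    # and print a per-step duration table.
    grep -oE 'name: .*|duration: [0-9.]+ ms' build.log |
      awk -F': ' '/^name/ {step = $2} /^duration/ {printf "%-40s %s\n", step, $2}'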
00:31:19.239 [2024-11-25 20:49:27.212989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.213256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:19.239 [2024-11-25 20:49:27.213286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.303 ms 00:31:19.239 [2024-11-25 20:49:27.213301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.226065] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:19.239 [2024-11-25 20:49:27.231239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.231274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:19.239 [2024-11-25 20:49:27.231293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.710 ms 00:31:19.239 [2024-11-25 20:49:27.231314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.231441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.231459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:19.239 [2024-11-25 20:49:27.231474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:19.239 [2024-11-25 20:49:27.231488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.231603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.231620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:19.239 [2024-11-25 20:49:27.231635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:19.239 [2024-11-25 20:49:27.231651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.231688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.231700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:19.239 [2024-11-25 20:49:27.231712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:19.239 [2024-11-25 20:49:27.231728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.231776] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:19.239 [2024-11-25 20:49:27.231792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.231809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:19.239 [2024-11-25 20:49:27.231826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:19.239 [2024-11-25 20:49:27.231845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.272638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 20:49:27.272899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:19.239 [2024-11-25 20:49:27.273075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.832 ms 00:31:19.239 [2024-11-25 20:49:27.273144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.273437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.239 [2024-11-25 
20:49:27.273598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:19.239 [2024-11-25 20:49:27.273718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:19.239 [2024-11-25 20:49:27.273796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.239 [2024-11-25 20:49:27.276131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.705 ms, result 0 00:31:20.172  [2024-11-25T20:49:29.681Z] Copying: 23/1024 [MB] (23 MBps) [... 40 intermediate progress updates condensed: the copy advances ~25 MB per second at 23-27 MBps ...] [2024-11-25T20:50:09.113Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-25 20:50:08.955959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.977 [2024-11-25 20:50:08.956017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:00.977 [2024-11-25 20:50:08.956036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:00.977 [2024-11-25 20:50:08.956061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.977 [2024-11-25 20:50:08.957725] mngt/ftl_mngt_ioch.c:
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:00.977 [2024-11-25 20:50:08.963315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.977 [2024-11-25 20:50:08.963354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:00.977 [2024-11-25 20:50:08.963369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.566 ms 00:32:00.977 [2024-11-25 20:50:08.963389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.977 [2024-11-25 20:50:08.973843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.977 [2024-11-25 20:50:08.973881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:00.978 [2024-11-25 20:50:08.973896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.800 ms 00:32:00.978 [2024-11-25 20:50:08.973908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.978 [2024-11-25 20:50:08.997602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.978 [2024-11-25 20:50:08.997642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:00.978 [2024-11-25 20:50:08.997657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.713 ms 00:32:00.978 [2024-11-25 20:50:08.997670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.978 [2024-11-25 20:50:09.002638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.978 [2024-11-25 20:50:09.002678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:00.978 [2024-11-25 20:50:09.002691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.943 ms 00:32:00.978 [2024-11-25 20:50:09.002702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.978 [2024-11-25 20:50:09.038195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.978 [2024-11-25 20:50:09.038339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:00.978 [2024-11-25 20:50:09.038375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.506 ms 00:32:00.978 [2024-11-25 20:50:09.038387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.978 [2024-11-25 20:50:09.058996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.978 [2024-11-25 20:50:09.059032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:00.978 [2024-11-25 20:50:09.059046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.606 ms 00:32:00.978 [2024-11-25 20:50:09.059072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.237 [2024-11-25 20:50:09.181340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.237 [2024-11-25 20:50:09.181391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:01.237 [2024-11-25 20:50:09.181412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 122.422 ms 00:32:01.237 [2024-11-25 20:50:09.181423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.237 [2024-11-25 20:50:09.217197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.237 [2024-11-25 20:50:09.217339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:01.237 [2024-11-25 20:50:09.217376] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.814 ms 00:32:01.237 [2024-11-25 20:50:09.217401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.237 [2024-11-25 20:50:09.252881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.237 [2024-11-25 20:50:09.252930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:01.237 [2024-11-25 20:50:09.252943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.502 ms 00:32:01.237 [2024-11-25 20:50:09.252953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.237 [2024-11-25 20:50:09.286442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.237 [2024-11-25 20:50:09.286477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:01.237 [2024-11-25 20:50:09.286491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.507 ms 00:32:01.237 [2024-11-25 20:50:09.286501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.237 [2024-11-25 20:50:09.319961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.237 [2024-11-25 20:50:09.319994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:01.238 [2024-11-25 20:50:09.320006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.424 ms 00:32:01.238 [2024-11-25 20:50:09.320032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.238 [2024-11-25 20:50:09.320067] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:01.238 [2024-11-25 20:50:09.320086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109312 / 261120 wr_cnt: 1 state: open 00:32:01.238 [2024-11-25 20:50:09.320099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320236] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320534] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 
20:50:09.320823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:01.238 [2024-11-25 20:50:09.320879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.320990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:32:01.239 [2024-11-25 20:50:09.321100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:01.239 [2024-11-25 20:50:09.321240] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:01.239 [2024-11-25 20:50:09.321250] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95033dac-05e9-4434-8c8c-ed39204e32c8 00:32:01.239 [2024-11-25 20:50:09.321279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109312 00:32:01.239 [2024-11-25 20:50:09.321289] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110272 00:32:01.239 [2024-11-25 20:50:09.321300] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109312 00:32:01.239 [2024-11-25 20:50:09.321311] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:32:01.239 [2024-11-25 20:50:09.321322] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:01.239 [2024-11-25 20:50:09.321334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:01.239 [2024-11-25 20:50:09.321354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:01.239 [2024-11-25 20:50:09.321364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:01.239 [2024-11-25 20:50:09.321373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:01.239 [2024-11-25 20:50:09.321383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.239 [2024-11-25 20:50:09.321394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:01.239 [2024-11-25 20:50:09.321405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:32:01.239 [2024-11-25 20:50:09.321415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.239 [2024-11-25 20:50:09.341317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:01.239 [2024-11-25 20:50:09.341358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:01.239 [2024-11-25 20:50:09.341370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.899 ms 00:32:01.239 [2024-11-25 20:50:09.341381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.239 [2024-11-25 20:50:09.342051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.239 [2024-11-25 20:50:09.342080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:01.239 [2024-11-25 20:50:09.342092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:32:01.239 [2024-11-25 20:50:09.342106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.395872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.396040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:01.499 [2024-11-25 20:50:09.396061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.396075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.396138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.396150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:01.499 [2024-11-25 20:50:09.396161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.396179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.396262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.396278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:01.499 [2024-11-25 20:50:09.396289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.396300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.396318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.396345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:01.499 [2024-11-25 20:50:09.396357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.396369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.527220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.527500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:01.499 [2024-11-25 20:50:09.527536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.527550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.629833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.629890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:01.499 [2024-11-25 20:50:09.629907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.629926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 
20:50:09.630031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.630043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:01.499 [2024-11-25 20:50:09.630055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.630066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.630124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.630138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:01.499 [2024-11-25 20:50:09.630149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.630161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.630291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.630305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:01.499 [2024-11-25 20:50:09.630317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.630349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.630391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.630404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:01.499 [2024-11-25 20:50:09.630415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.630426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.630476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.630488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:01.499 [2024-11-25 20:50:09.630499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.630510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.499 [2024-11-25 20:50:09.630562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.499 [2024-11-25 20:50:09.630585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:01.499 [2024-11-25 20:50:09.630596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.499 [2024-11-25 20:50:09.630607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.759 [2024-11-25 20:50:09.630973] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 678.247 ms, result 0 00:32:03.138 00:32:03.138 00:32:03.138 20:50:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:05.042 20:50:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:05.042 [2024-11-25 20:50:12.996635] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
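A quick cross-check on the spdk_dd transfer above (illustrative arithmetic only, not part of the test output): --count=262144 logical blocks at a 4 KiB FTL block size, an assumption consistent with the progress meter reporting 1024 MiB in total, works out to exactly 1 GiB.

```c
/* Sketch: verify that 262144 blocks at an assumed 4 KiB logical block size
 * equals the 1024 MiB total shown by the "Copying:" progress lines. */
#include <stdio.h>

int main(void)
{
    const unsigned long long blocks = 262144ULL;   /* --count from the spdk_dd command */
    const unsigned long long block_size = 4096ULL; /* assumed FTL logical block size in bytes */
    const unsigned long long bytes = blocks * block_size;

    printf("%llu blocks * %llu B = %llu B = %llu MiB\n",
           blocks, block_size, bytes, bytes >> 20);
    /* Prints: 262144 blocks * 4096 B = 1073741824 B = 1024 MiB */
    return 0;
}
```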
00:32:05.042 [2024-11-25 20:50:12.996767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82978 ] 00:32:05.301 [2024-11-25 20:50:13.179886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.301 [2024-11-25 20:50:13.315078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.871 [2024-11-25 20:50:13.712160] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:05.871 [2024-11-25 20:50:13.712531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:05.871 [2024-11-25 20:50:13.879701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.879759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:05.871 [2024-11-25 20:50:13.879778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:05.871 [2024-11-25 20:50:13.879804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.879856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.879872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:05.871 [2024-11-25 20:50:13.879884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:05.871 [2024-11-25 20:50:13.879894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.879917] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:05.871 [2024-11-25 20:50:13.880904] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:05.871 [2024-11-25 20:50:13.880928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.880939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:05.871 [2024-11-25 20:50:13.880951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:32:05.871 [2024-11-25 20:50:13.880961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.883485] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:05.871 [2024-11-25 20:50:13.902978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.903017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:05.871 [2024-11-25 20:50:13.903033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.524 ms 00:32:05.871 [2024-11-25 20:50:13.903044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.903115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.903128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:05.871 [2024-11-25 20:50:13.903140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:05.871 [2024-11-25 20:50:13.903152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.915782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:05.871 [2024-11-25 20:50:13.915812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:05.871 [2024-11-25 20:50:13.915825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.574 ms 00:32:05.871 [2024-11-25 20:50:13.915857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.915945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.915960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:05.871 [2024-11-25 20:50:13.915972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:05.871 [2024-11-25 20:50:13.915983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.916041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.916054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:05.871 [2024-11-25 20:50:13.916066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:05.871 [2024-11-25 20:50:13.916076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.916108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:05.871 [2024-11-25 20:50:13.921896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.921929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:05.871 [2024-11-25 20:50:13.921946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.806 ms 00:32:05.871 [2024-11-25 20:50:13.921957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.921989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.871 [2024-11-25 20:50:13.922000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:05.871 [2024-11-25 20:50:13.922011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:05.871 [2024-11-25 20:50:13.922022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.871 [2024-11-25 20:50:13.922061] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:05.871 [2024-11-25 20:50:13.922087] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:05.871 [2024-11-25 20:50:13.922127] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:05.871 [2024-11-25 20:50:13.922150] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:05.871 [2024-11-25 20:50:13.922245] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:05.871 [2024-11-25 20:50:13.922259] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:05.871 [2024-11-25 20:50:13.922274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:05.872 [2024-11-25 20:50:13.922287] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922300] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922312] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:05.872 [2024-11-25 20:50:13.922338] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:05.872 [2024-11-25 20:50:13.922350] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:05.872 [2024-11-25 20:50:13.922365] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:05.872 [2024-11-25 20:50:13.922377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.872 [2024-11-25 20:50:13.922388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:05.872 [2024-11-25 20:50:13.922399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:32:05.872 [2024-11-25 20:50:13.922410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.872 [2024-11-25 20:50:13.922483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.872 [2024-11-25 20:50:13.922494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:05.872 [2024-11-25 20:50:13.922505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:05.872 [2024-11-25 20:50:13.922515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.872 [2024-11-25 20:50:13.922619] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:05.872 [2024-11-25 20:50:13.922635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:05.872 [2024-11-25 20:50:13.922656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:05.872 [2024-11-25 20:50:13.922687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:05.872 [2024-11-25 20:50:13.922718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:05.872 [2024-11-25 20:50:13.922737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:05.872 [2024-11-25 20:50:13.922753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:05.872 [2024-11-25 20:50:13.922763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:05.872 [2024-11-25 20:50:13.922783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:05.872 [2024-11-25 20:50:13.922793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:05.872 [2024-11-25 20:50:13.922803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:05.872 [2024-11-25 20:50:13.922822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922833] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:05.872 [2024-11-25 20:50:13.922852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:05.872 [2024-11-25 20:50:13.922881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:05.872 [2024-11-25 20:50:13.922910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:05.872 [2024-11-25 20:50:13.922937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.872 [2024-11-25 20:50:13.922956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:05.872 [2024-11-25 20:50:13.922965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:05.872 [2024-11-25 20:50:13.922974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:05.872 [2024-11-25 20:50:13.922983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:05.872 [2024-11-25 20:50:13.922992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:05.872 [2024-11-25 20:50:13.923001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:05.872 [2024-11-25 20:50:13.923010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:05.872 [2024-11-25 20:50:13.923019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:05.872 [2024-11-25 20:50:13.923028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.872 [2024-11-25 20:50:13.923037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:05.872 [2024-11-25 20:50:13.923046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:05.872 [2024-11-25 20:50:13.923056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.872 [2024-11-25 20:50:13.923068] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:05.872 [2024-11-25 20:50:13.923079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:05.872 [2024-11-25 20:50:13.923088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:05.872 [2024-11-25 20:50:13.923099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.872 [2024-11-25 20:50:13.923120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:05.872 [2024-11-25 20:50:13.923129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:05.872 [2024-11-25 20:50:13.923138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:05.872 
[2024-11-25 20:50:13.923148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:05.872 [2024-11-25 20:50:13.923157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:05.872 [2024-11-25 20:50:13.923166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:05.872 [2024-11-25 20:50:13.923177] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:05.872 [2024-11-25 20:50:13.923190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:05.872 [2024-11-25 20:50:13.923206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:05.872 [2024-11-25 20:50:13.923217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:05.872 [2024-11-25 20:50:13.923228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:05.872 [2024-11-25 20:50:13.923239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:05.872 [2024-11-25 20:50:13.923249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:05.872 [2024-11-25 20:50:13.923259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:05.872 [2024-11-25 20:50:13.923269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:05.872 [2024-11-25 20:50:13.923280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:05.872 [2024-11-25 20:50:13.923290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:05.872 [2024-11-25 20:50:13.923300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:05.872 [2024-11-25 20:50:13.923311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:05.872 [2024-11-25 20:50:13.923321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:05.872 [2024-11-25 20:50:13.923330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:05.872 [2024-11-25 20:50:13.923358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:05.872 [2024-11-25 20:50:13.923369] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:05.872 [2024-11-25 20:50:13.923381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:05.872 [2024-11-25 20:50:13.923393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:05.872 [2024-11-25 20:50:13.923403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:05.872 [2024-11-25 20:50:13.923414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:05.872 [2024-11-25 20:50:13.923425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:05.872 [2024-11-25 20:50:13.923439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.872 [2024-11-25 20:50:13.923450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:05.872 [2024-11-25 20:50:13.923460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:32:05.872 [2024-11-25 20:50:13.923471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.872 [2024-11-25 20:50:13.973774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.872 [2024-11-25 20:50:13.973821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:05.872 [2024-11-25 20:50:13.973836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.317 ms 00:32:05.872 [2024-11-25 20:50:13.973852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.872 [2024-11-25 20:50:13.973946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.873 [2024-11-25 20:50:13.973958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:05.873 [2024-11-25 20:50:13.973969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:05.873 [2024-11-25 20:50:13.973980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.132 [2024-11-25 20:50:14.037524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.132 [2024-11-25 20:50:14.037734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:06.132 [2024-11-25 20:50:14.037757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.554 ms 00:32:06.132 [2024-11-25 20:50:14.037769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.132 [2024-11-25 20:50:14.037825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.132 [2024-11-25 20:50:14.037845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:06.132 [2024-11-25 20:50:14.037857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:06.132 [2024-11-25 20:50:14.037868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.132 [2024-11-25 20:50:14.038721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.132 [2024-11-25 20:50:14.038742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:06.132 [2024-11-25 20:50:14.038754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:32:06.132 [2024-11-25 20:50:14.038766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.132 [2024-11-25 20:50:14.038903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.132 [2024-11-25 20:50:14.038917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:06.132 [2024-11-25 20:50:14.038936] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:32:06.132 [2024-11-25 20:50:14.038946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.132 [2024-11-25 20:50:14.061688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.132 [2024-11-25 20:50:14.061722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:06.133 [2024-11-25 20:50:14.061740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.755 ms 00:32:06.133 [2024-11-25 20:50:14.061751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.080976] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:06.133 [2024-11-25 20:50:14.081013] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:06.133 [2024-11-25 20:50:14.081028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.081040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:06.133 [2024-11-25 20:50:14.081051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.167 ms 00:32:06.133 [2024-11-25 20:50:14.081061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.109149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.109185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:06.133 [2024-11-25 20:50:14.109199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.094 ms 00:32:06.133 [2024-11-25 20:50:14.109210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.126515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.126549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:06.133 [2024-11-25 20:50:14.126562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.285 ms 00:32:06.133 [2024-11-25 20:50:14.126572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.143964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.144014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:06.133 [2024-11-25 20:50:14.144027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.385 ms 00:32:06.133 [2024-11-25 20:50:14.144036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.144906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.144937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:06.133 [2024-11-25 20:50:14.144956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:32:06.133 [2024-11-25 20:50:14.144966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.240616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.240675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:06.133 [2024-11-25 20:50:14.240699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.780 ms 00:32:06.133 [2024-11-25 20:50:14.240711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.251575] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:06.133 [2024-11-25 20:50:14.255542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.255572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:06.133 [2024-11-25 20:50:14.255587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.805 ms 00:32:06.133 [2024-11-25 20:50:14.255598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.255707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.255721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:06.133 [2024-11-25 20:50:14.255734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:06.133 [2024-11-25 20:50:14.255749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.258086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.258124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:06.133 [2024-11-25 20:50:14.258137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:32:06.133 [2024-11-25 20:50:14.258147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.258179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.258191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:06.133 [2024-11-25 20:50:14.258203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:06.133 [2024-11-25 20:50:14.258213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.133 [2024-11-25 20:50:14.258261] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:06.133 [2024-11-25 20:50:14.258274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.133 [2024-11-25 20:50:14.258285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:06.133 [2024-11-25 20:50:14.258296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:06.133 [2024-11-25 20:50:14.258307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.392 [2024-11-25 20:50:14.295611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.392 [2024-11-25 20:50:14.295648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:06.392 [2024-11-25 20:50:14.295662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.333 ms 00:32:06.392 [2024-11-25 20:50:14.295679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.392 [2024-11-25 20:50:14.295757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.392 [2024-11-25 20:50:14.295770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:06.392 [2024-11-25 20:50:14.295781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:06.392 [2024-11-25 20:50:14.295791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
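For readers scanning the "Dump statistics" block from the first shutdown above: the reported WAF is simply total media writes divided by user writes. A minimal recomputation from the logged values (both taken verbatim from the ftl_debug.c dump):

```c
/* Sketch: recompute the write-amplification factor (WAF) from the values
 * logged by ftl_dev_dump_stats in the first shutdown above. */
#include <stdio.h>

int main(void)
{
    const double total_writes = 110272.0; /* "total writes" from the log */
    const double user_writes  = 109312.0; /* "user writes" from the log */

    printf("WAF = %.4f\n", total_writes / user_writes);
    /* Prints: WAF = 1.0088, matching the logged value */
    return 0;
}
```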
00:32:06.392 [2024-11-25 20:50:14.297211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.663 ms, result 0 00:32:07.773  [2024-11-25T20:50:16.851Z] Copying: 1484/1048576 [kB] (1484 kBps) [2024-11-25T20:50:17.787Z] Copying: 10280/1048576 [kB] (8796 kBps) [2024-11-25T20:50:18.721Z] Copying: 42/1024 [MB] (32 MBps) [2024-11-25T20:50:19.656Z] Copying: 75/1024 [MB] (33 MBps) [2024-11-25T20:50:20.591Z] Copying: 109/1024 [MB] (33 MBps) [2024-11-25T20:50:21.526Z] Copying: 143/1024 [MB] (34 MBps) [2024-11-25T20:50:22.902Z] Copying: 176/1024 [MB] (33 MBps) [2024-11-25T20:50:23.838Z] Copying: 210/1024 [MB] (33 MBps) [2024-11-25T20:50:24.772Z] Copying: 244/1024 [MB] (34 MBps) [2024-11-25T20:50:25.707Z] Copying: 278/1024 [MB] (34 MBps) [2024-11-25T20:50:26.643Z] Copying: 312/1024 [MB] (34 MBps) [2024-11-25T20:50:27.576Z] Copying: 346/1024 [MB] (34 MBps) [2024-11-25T20:50:28.512Z] Copying: 380/1024 [MB] (34 MBps) [2024-11-25T20:50:29.886Z] Copying: 414/1024 [MB] (33 MBps) [2024-11-25T20:50:30.821Z] Copying: 448/1024 [MB] (33 MBps) [2024-11-25T20:50:31.756Z] Copying: 482/1024 [MB] (33 MBps) [2024-11-25T20:50:32.692Z] Copying: 516/1024 [MB] (33 MBps) [2024-11-25T20:50:33.630Z] Copying: 548/1024 [MB] (32 MBps) [2024-11-25T20:50:34.568Z] Copying: 582/1024 [MB] (33 MBps) [2024-11-25T20:50:35.504Z] Copying: 616/1024 [MB] (33 MBps) [2024-11-25T20:50:36.916Z] Copying: 650/1024 [MB] (34 MBps) [2024-11-25T20:50:37.484Z] Copying: 684/1024 [MB] (33 MBps) [2024-11-25T20:50:38.860Z] Copying: 718/1024 [MB] (33 MBps) [2024-11-25T20:50:39.794Z] Copying: 751/1024 [MB] (32 MBps) [2024-11-25T20:50:40.761Z] Copying: 785/1024 [MB] (34 MBps) [2024-11-25T20:50:41.693Z] Copying: 820/1024 [MB] (34 MBps) [2024-11-25T20:50:42.625Z] Copying: 854/1024 [MB] (33 MBps) [2024-11-25T20:50:43.559Z] Copying: 887/1024 [MB] (33 MBps) [2024-11-25T20:50:44.494Z] Copying: 921/1024 [MB] (33 MBps) [2024-11-25T20:50:45.872Z] Copying: 955/1024 [MB] (34 MBps) [2024-11-25T20:50:46.809Z] Copying: 989/1024 [MB] (33 MBps) [2024-11-25T20:50:46.809Z] Copying: 1023/1024 [MB] (33 MBps) [2024-11-25T20:50:46.809Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-25 20:50:46.747645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.673 [2024-11-25 20:50:46.747804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:38.673 [2024-11-25 20:50:46.747840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:38.673 [2024-11-25 20:50:46.747864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.673 [2024-11-25 20:50:46.747913] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:38.673 [2024-11-25 20:50:46.757309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.673 [2024-11-25 20:50:46.757368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:38.673 [2024-11-25 20:50:46.757387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.373 ms 00:32:38.673 [2024-11-25 20:50:46.757404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.673 [2024-11-25 20:50:46.757750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.673 [2024-11-25 20:50:46.757775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:38.673 [2024-11-25 20:50:46.757802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.296 ms 00:32:38.673 [2024-11-25 20:50:46.757817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.673 [2024-11-25 20:50:46.770898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.673 [2024-11-25 20:50:46.771059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:38.673 [2024-11-25 20:50:46.771171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.075 ms 00:32:38.673 [2024-11-25 20:50:46.771211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.673 [2024-11-25 20:50:46.776287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.673 [2024-11-25 20:50:46.776431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:38.673 [2024-11-25 20:50:46.776576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.022 ms 00:32:38.673 [2024-11-25 20:50:46.776613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.934 [2024-11-25 20:50:46.813129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.934 [2024-11-25 20:50:46.813255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:38.934 [2024-11-25 20:50:46.813412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.500 ms 00:32:38.934 [2024-11-25 20:50:46.813453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.934 [2024-11-25 20:50:46.833968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.934 [2024-11-25 20:50:46.834095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:38.934 [2024-11-25 20:50:46.834239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.491 ms 00:32:38.934 [2024-11-25 20:50:46.834257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.934 [2024-11-25 20:50:46.836170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.934 [2024-11-25 20:50:46.836210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:38.934 [2024-11-25 20:50:46.836223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:32:38.934 [2024-11-25 20:50:46.836234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.934 [2024-11-25 20:50:46.871461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.934 [2024-11-25 20:50:46.871588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:38.934 [2024-11-25 20:50:46.871608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.259 ms 00:32:38.934 [2024-11-25 20:50:46.871633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.934 [2024-11-25 20:50:46.905633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.934 [2024-11-25 20:50:46.905679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:38.934 [2024-11-25 20:50:46.905692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.993 ms 00:32:38.934 [2024-11-25 20:50:46.905702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.934 [2024-11-25 20:50:46.939942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.934 [2024-11-25 20:50:46.940061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:38.934 [2024-11-25 
20:50:46.940144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.261 ms 00:32:38.934 [2024-11-25 20:50:46.940179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.934 [2024-11-25 20:50:46.974098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.934 [2024-11-25 20:50:46.974239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:38.934 [2024-11-25 20:50:46.974324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.863 ms 00:32:38.934 [2024-11-25 20:50:46.974371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.934 [2024-11-25 20:50:46.974425] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:38.934 [2024-11-25 20:50:46.974467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:38.934 [2024-11-25 20:50:46.974517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:38.934 [2024-11-25 20:50:46.974565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974924] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.974997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 
20:50:46.975198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:38.934 [2024-11-25 20:50:46.975388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:32:38.935 [2024-11-25 20:50:46.975497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:38.935 [2024-11-25 20:50:46.975857] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:38.935 [2024-11-25 20:50:46.975868] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95033dac-05e9-4434-8c8c-ed39204e32c8 00:32:38.935 [2024-11-25 20:50:46.975880] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:38.935 [2024-11-25 20:50:46.975890] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 155328 00:32:38.935 [2024-11-25 20:50:46.975904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 153344 00:32:38.935 [2024-11-25 20:50:46.975915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0129 00:32:38.935 [2024-11-25 20:50:46.975925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:38.935 [2024-11-25 20:50:46.975946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:38.935 [2024-11-25 20:50:46.975957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:38.935 [2024-11-25 20:50:46.975966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:38.935 [2024-11-25 20:50:46.975975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:38.935 [2024-11-25 20:50:46.975985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.935 [2024-11-25 20:50:46.975997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:38.935 [2024-11-25 20:50:46.976008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:32:38.935 [2024-11-25 20:50:46.976018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.935 [2024-11-25 20:50:46.996685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.935 [2024-11-25 20:50:46.996723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:38.935 [2024-11-25 20:50:46.996735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.660 ms 00:32:38.935 [2024-11-25 20:50:46.996745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.935 [2024-11-25 20:50:46.997296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.935 [2024-11-25 20:50:46.997310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:38.935 [2024-11-25 20:50:46.997321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:32:38.935 [2024-11-25 20:50:46.997344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.935 [2024-11-25 
20:50:47.050362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.935 [2024-11-25 20:50:47.050397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:38.935 [2024-11-25 20:50:47.050410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.935 [2024-11-25 20:50:47.050438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.935 [2024-11-25 20:50:47.050501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.935 [2024-11-25 20:50:47.050513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:38.935 [2024-11-25 20:50:47.050524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.935 [2024-11-25 20:50:47.050535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.935 [2024-11-25 20:50:47.050610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.935 [2024-11-25 20:50:47.050623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:38.935 [2024-11-25 20:50:47.050635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.935 [2024-11-25 20:50:47.050645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.935 [2024-11-25 20:50:47.050663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.935 [2024-11-25 20:50:47.050675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:38.935 [2024-11-25 20:50:47.050685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.935 [2024-11-25 20:50:47.050695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.180823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.195 [2024-11-25 20:50:47.180892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:39.195 [2024-11-25 20:50:47.180908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.195 [2024-11-25 20:50:47.180919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.282711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.195 [2024-11-25 20:50:47.282772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:39.195 [2024-11-25 20:50:47.282788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.195 [2024-11-25 20:50:47.282800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.282910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.195 [2024-11-25 20:50:47.282927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:39.195 [2024-11-25 20:50:47.282938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.195 [2024-11-25 20:50:47.282949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.282995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.195 [2024-11-25 20:50:47.283007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:39.195 [2024-11-25 20:50:47.283018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.195 [2024-11-25 20:50:47.283028] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.283153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.195 [2024-11-25 20:50:47.283167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:39.195 [2024-11-25 20:50:47.283182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.195 [2024-11-25 20:50:47.283192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.283229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.195 [2024-11-25 20:50:47.283242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:39.195 [2024-11-25 20:50:47.283253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.195 [2024-11-25 20:50:47.283263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.283309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.195 [2024-11-25 20:50:47.283321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:39.195 [2024-11-25 20:50:47.283352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.195 [2024-11-25 20:50:47.283379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.283431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.195 [2024-11-25 20:50:47.283444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:39.195 [2024-11-25 20:50:47.283454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.195 [2024-11-25 20:50:47.283465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.195 [2024-11-25 20:50:47.283632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.809 ms, result 0 00:32:40.574 00:32:40.574 00:32:40.574 20:50:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:42.478 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:42.478 20:50:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:42.478 [2024-11-25 20:50:50.189292] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:32:42.478 [2024-11-25 20:50:50.189438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83348 ] 00:32:42.478 [2024-11-25 20:50:50.370673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.478 [2024-11-25 20:50:50.509384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.049 [2024-11-25 20:50:50.917247] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:43.049 [2024-11-25 20:50:50.917342] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:43.049 [2024-11-25 20:50:51.083704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.083763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:43.049 [2024-11-25 20:50:51.083780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:43.049 [2024-11-25 20:50:51.083792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.083846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.083862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:43.049 [2024-11-25 20:50:51.083873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:43.049 [2024-11-25 20:50:51.083884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.083906] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:43.049 [2024-11-25 20:50:51.084915] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:43.049 [2024-11-25 20:50:51.085000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.085013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:43.049 [2024-11-25 20:50:51.085025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:32:43.049 [2024-11-25 20:50:51.085036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.087467] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:43.049 [2024-11-25 20:50:51.107545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.107687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:43.049 [2024-11-25 20:50:51.107709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.110 ms 00:32:43.049 [2024-11-25 20:50:51.107737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.107807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.107821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:43.049 [2024-11-25 20:50:51.107833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:43.049 [2024-11-25 20:50:51.107844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.120351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:43.049 [2024-11-25 20:50:51.120380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:43.049 [2024-11-25 20:50:51.120393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.451 ms 00:32:43.049 [2024-11-25 20:50:51.120424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.120516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.120530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:43.049 [2024-11-25 20:50:51.120542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:32:43.049 [2024-11-25 20:50:51.120553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.120610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.120622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:43.049 [2024-11-25 20:50:51.120633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:43.049 [2024-11-25 20:50:51.120643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.120675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:43.049 [2024-11-25 20:50:51.126516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.126550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:43.049 [2024-11-25 20:50:51.126567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.858 ms 00:32:43.049 [2024-11-25 20:50:51.126578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.126611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.126623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:43.049 [2024-11-25 20:50:51.126634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:43.049 [2024-11-25 20:50:51.126645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.126682] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:43.049 [2024-11-25 20:50:51.126707] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:43.049 [2024-11-25 20:50:51.126745] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:43.049 [2024-11-25 20:50:51.126768] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:43.049 [2024-11-25 20:50:51.126862] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:43.049 [2024-11-25 20:50:51.126876] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:43.049 [2024-11-25 20:50:51.126890] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:43.049 [2024-11-25 20:50:51.126904] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:43.049 [2024-11-25 20:50:51.126918] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:43.049 [2024-11-25 20:50:51.126941] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:43.049 [2024-11-25 20:50:51.126952] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:43.049 [2024-11-25 20:50:51.126962] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:43.049 [2024-11-25 20:50:51.127111] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:43.049 [2024-11-25 20:50:51.127122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.127132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:43.049 [2024-11-25 20:50:51.127142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:32:43.049 [2024-11-25 20:50:51.127153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.127223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.049 [2024-11-25 20:50:51.127234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:43.049 [2024-11-25 20:50:51.127245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:43.049 [2024-11-25 20:50:51.127255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.049 [2024-11-25 20:50:51.127373] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:43.049 [2024-11-25 20:50:51.127390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:43.049 [2024-11-25 20:50:51.127401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:43.049 [2024-11-25 20:50:51.127412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:43.049 [2024-11-25 20:50:51.127433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:43.049 [2024-11-25 20:50:51.127453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:43.049 [2024-11-25 20:50:51.127462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:43.049 [2024-11-25 20:50:51.127481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:43.049 [2024-11-25 20:50:51.127491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:43.049 [2024-11-25 20:50:51.127519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:43.049 [2024-11-25 20:50:51.127540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:43.049 [2024-11-25 20:50:51.127550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:43.049 [2024-11-25 20:50:51.127560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:43.049 [2024-11-25 20:50:51.127580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:43.049 [2024-11-25 20:50:51.127589] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:43.049 [2024-11-25 20:50:51.127616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:43.049 [2024-11-25 20:50:51.127636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:43.049 [2024-11-25 20:50:51.127645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:43.049 [2024-11-25 20:50:51.127664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:43.049 [2024-11-25 20:50:51.127673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:43.049 [2024-11-25 20:50:51.127692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:43.049 [2024-11-25 20:50:51.127702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:43.049 [2024-11-25 20:50:51.127721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:43.049 [2024-11-25 20:50:51.127730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:43.049 [2024-11-25 20:50:51.127740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:43.049 [2024-11-25 20:50:51.127749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:43.049 [2024-11-25 20:50:51.127759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:43.050 [2024-11-25 20:50:51.127768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:43.050 [2024-11-25 20:50:51.127777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:43.050 [2024-11-25 20:50:51.127786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:43.050 [2024-11-25 20:50:51.127795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:43.050 [2024-11-25 20:50:51.127804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:43.050 [2024-11-25 20:50:51.127814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:43.050 [2024-11-25 20:50:51.127823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:43.050 [2024-11-25 20:50:51.127832] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:43.050 [2024-11-25 20:50:51.127853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:43.050 [2024-11-25 20:50:51.127864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:43.050 [2024-11-25 20:50:51.127875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:43.050 [2024-11-25 20:50:51.127885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:43.050 [2024-11-25 20:50:51.127895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:43.050 [2024-11-25 20:50:51.127906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:43.050 
[2024-11-25 20:50:51.127916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:43.050 [2024-11-25 20:50:51.127926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:43.050 [2024-11-25 20:50:51.127936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:43.050 [2024-11-25 20:50:51.127947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:43.050 [2024-11-25 20:50:51.127960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:43.050 [2024-11-25 20:50:51.127977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:43.050 [2024-11-25 20:50:51.127988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:43.050 [2024-11-25 20:50:51.127999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:43.050 [2024-11-25 20:50:51.128009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:43.050 [2024-11-25 20:50:51.128021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:43.050 [2024-11-25 20:50:51.128032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:43.050 [2024-11-25 20:50:51.128043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:43.050 [2024-11-25 20:50:51.128053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:43.050 [2024-11-25 20:50:51.128064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:43.050 [2024-11-25 20:50:51.128075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:43.050 [2024-11-25 20:50:51.128085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:43.050 [2024-11-25 20:50:51.128096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:43.050 [2024-11-25 20:50:51.128106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:43.050 [2024-11-25 20:50:51.128117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:43.050 [2024-11-25 20:50:51.128128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:43.050 [2024-11-25 20:50:51.128140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:43.050 [2024-11-25 20:50:51.128151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:43.050 [2024-11-25 20:50:51.128161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:43.050 [2024-11-25 20:50:51.128171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:43.050 [2024-11-25 20:50:51.128181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:43.050 [2024-11-25 20:50:51.128192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.050 [2024-11-25 20:50:51.128204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:43.050 [2024-11-25 20:50:51.128215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:32:43.050 [2024-11-25 20:50:51.128225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.050 [2024-11-25 20:50:51.178670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.050 [2024-11-25 20:50:51.178710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:43.050 [2024-11-25 20:50:51.178725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.474 ms 00:32:43.050 [2024-11-25 20:50:51.178741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.050 [2024-11-25 20:50:51.178829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.050 [2024-11-25 20:50:51.178841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:43.050 [2024-11-25 20:50:51.178853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:43.050 [2024-11-25 20:50:51.178863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.244752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.244794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:43.310 [2024-11-25 20:50:51.244809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.903 ms 00:32:43.310 [2024-11-25 20:50:51.244821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.244865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.244882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:43.310 [2024-11-25 20:50:51.244894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:43.310 [2024-11-25 20:50:51.244904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.245754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.245770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:43.310 [2024-11-25 20:50:51.245782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:32:43.310 [2024-11-25 20:50:51.245801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.245938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.245957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:43.310 [2024-11-25 20:50:51.245975] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:32:43.310 [2024-11-25 20:50:51.245986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.269725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.269759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:43.310 [2024-11-25 20:50:51.269777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.755 ms 00:32:43.310 [2024-11-25 20:50:51.269811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.289183] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:43.310 [2024-11-25 20:50:51.289223] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:43.310 [2024-11-25 20:50:51.289249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.289260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:43.310 [2024-11-25 20:50:51.289272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.340 ms 00:32:43.310 [2024-11-25 20:50:51.289282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.318084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.318121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:43.310 [2024-11-25 20:50:51.318135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.806 ms 00:32:43.310 [2024-11-25 20:50:51.318146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.335666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.335701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:43.310 [2024-11-25 20:50:51.335714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.490 ms 00:32:43.310 [2024-11-25 20:50:51.335724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.353019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.353146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:43.310 [2024-11-25 20:50:51.353166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.285 ms 00:32:43.310 [2024-11-25 20:50:51.353192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.310 [2024-11-25 20:50:51.353962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.310 [2024-11-25 20:50:51.353986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:43.310 [2024-11-25 20:50:51.354002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:32:43.310 [2024-11-25 20:50:51.354013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.570 [2024-11-25 20:50:51.448738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.570 [2024-11-25 20:50:51.448800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:43.570 [2024-11-25 20:50:51.448826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.854 ms 00:32:43.570 [2024-11-25 20:50:51.448838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.570 [2024-11-25 20:50:51.460090] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:43.570 [2024-11-25 20:50:51.463916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.570 [2024-11-25 20:50:51.463948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:43.570 [2024-11-25 20:50:51.463962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.048 ms 00:32:43.570 [2024-11-25 20:50:51.463974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.570 [2024-11-25 20:50:51.464087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.570 [2024-11-25 20:50:51.464101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:43.570 [2024-11-25 20:50:51.464114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:43.571 [2024-11-25 20:50:51.464129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.571 [2024-11-25 20:50:51.465519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.571 [2024-11-25 20:50:51.465547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:43.571 [2024-11-25 20:50:51.465558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:32:43.571 [2024-11-25 20:50:51.465570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.571 [2024-11-25 20:50:51.465595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.571 [2024-11-25 20:50:51.465606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:43.571 [2024-11-25 20:50:51.465617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:43.571 [2024-11-25 20:50:51.465628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.571 [2024-11-25 20:50:51.465675] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:43.571 [2024-11-25 20:50:51.465688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.571 [2024-11-25 20:50:51.465700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:43.571 [2024-11-25 20:50:51.465710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:43.571 [2024-11-25 20:50:51.465721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.571 [2024-11-25 20:50:51.502302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.571 [2024-11-25 20:50:51.502477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:43.571 [2024-11-25 20:50:51.502500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.619 ms 00:32:43.571 [2024-11-25 20:50:51.502518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.571 [2024-11-25 20:50:51.502597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.571 [2024-11-25 20:50:51.502611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:43.571 [2024-11-25 20:50:51.502623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:32:43.571 [2024-11-25 20:50:51.502634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:43.571 [2024-11-25 20:50:51.504007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.469 ms, result 0 00:32:44.949  [2024-11-25T20:50:54.023Z] Copying: 28/1024 [MB] (28 MBps) [... intermediate copy-progress updates elided ...] [2024-11-25T20:51:29.292Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-25 20:51:29.247570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.156 [2024-11-25 20:51:29.247636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:21.156 [2024-11-25 20:51:29.247655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:21.156 [2024-11-25 20:51:29.247667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.156 [2024-11-25 20:51:29.247690] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:21.156 [2024-11-25 20:51:29.252835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.156 [2024-11-25 20:51:29.252990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:21.156 [2024-11-25 20:51:29.253100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.133 ms 00:33:21.156 [2024-11-25 20:51:29.253141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.156 [2024-11-25
20:51:29.253382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.156 [2024-11-25 20:51:29.253701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:21.156 [2024-11-25 20:51:29.253743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:33:21.156 [2024-11-25 20:51:29.253774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.156 [2024-11-25 20:51:29.256594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.156 [2024-11-25 20:51:29.256961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:21.156 [2024-11-25 20:51:29.257060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:33:21.156 [2024-11-25 20:51:29.257110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.156 [2024-11-25 20:51:29.262673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.156 [2024-11-25 20:51:29.262810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:21.156 [2024-11-25 20:51:29.262831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.527 ms 00:33:21.156 [2024-11-25 20:51:29.262841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.415 [2024-11-25 20:51:29.299671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.415 [2024-11-25 20:51:29.299831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:21.415 [2024-11-25 20:51:29.299854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.803 ms 00:33:21.415 [2024-11-25 20:51:29.299865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.415 [2024-11-25 20:51:29.320230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.415 [2024-11-25 20:51:29.320266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:21.415 [2024-11-25 20:51:29.320280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.358 ms 00:33:21.415 [2024-11-25 20:51:29.320306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.415 [2024-11-25 20:51:29.322387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.416 [2024-11-25 20:51:29.322423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:21.416 [2024-11-25 20:51:29.322437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:33:21.416 [2024-11-25 20:51:29.322448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.416 [2024-11-25 20:51:29.357388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.416 [2024-11-25 20:51:29.357422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:21.416 [2024-11-25 20:51:29.357434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.980 ms 00:33:21.416 [2024-11-25 20:51:29.357443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.416 [2024-11-25 20:51:29.391906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.416 [2024-11-25 20:51:29.392040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:21.416 [2024-11-25 20:51:29.392083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.483 ms 00:33:21.416 [2024-11-25 20:51:29.392100] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.416 [2024-11-25 20:51:29.425888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.416 [2024-11-25 20:51:29.426044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:21.416 [2024-11-25 20:51:29.426064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.801 ms 00:33:21.416 [2024-11-25 20:51:29.426090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.416 [2024-11-25 20:51:29.460525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.416 [2024-11-25 20:51:29.460673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:21.416 [2024-11-25 20:51:29.460694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.413 ms 00:33:21.416 [2024-11-25 20:51:29.460703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.416 [2024-11-25 20:51:29.460742] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:21.416 [2024-11-25 20:51:29.460767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:21.416 [2024-11-25 20:51:29.460780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:33:21.416 [2024-11-25 20:51:29.460792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460948] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.460998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 
20:51:29.461208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:33:21.416 [2024-11-25 20:51:29.461511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:21.416 [2024-11-25 20:51:29.461554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:21.417 [2024-11-25 20:51:29.461926] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:21.417 [2024-11-25 20:51:29.461952] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95033dac-05e9-4434-8c8c-ed39204e32c8 00:33:21.417 [2024-11-25 20:51:29.461965] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:33:21.417 [2024-11-25 20:51:29.461974] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:21.417 [2024-11-25 20:51:29.461984] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:21.417 [2024-11-25 20:51:29.461994] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:21.417 [2024-11-25 20:51:29.462015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:21.417 [2024-11-25 20:51:29.462026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:21.417 [2024-11-25 20:51:29.462037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:21.417 [2024-11-25 20:51:29.462045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:21.417 [2024-11-25 20:51:29.462060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:21.417 [2024-11-25 20:51:29.462075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.417 [2024-11-25 20:51:29.462086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:21.417 [2024-11-25 20:51:29.462104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:33:21.417 [2024-11-25 20:51:29.462118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.417 [2024-11-25 20:51:29.482656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.417 [2024-11-25 20:51:29.482687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:21.417 [2024-11-25 20:51:29.482700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.491 ms 00:33:21.417 [2024-11-25 20:51:29.482710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.417 [2024-11-25 20:51:29.483259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.417 [2024-11-25 20:51:29.483280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:33:21.417 [2024-11-25 20:51:29.483290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:33:21.417 [2024-11-25 20:51:29.483300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.417 [2024-11-25 20:51:29.535021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.417 [2024-11-25 20:51:29.535054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:21.417 [2024-11-25 20:51:29.535067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.417 [2024-11-25 20:51:29.535094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.417 [2024-11-25 20:51:29.535152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.417 [2024-11-25 20:51:29.535169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:21.417 [2024-11-25 20:51:29.535180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.417 [2024-11-25 20:51:29.535190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.417 [2024-11-25 20:51:29.535253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.417 [2024-11-25 20:51:29.535266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:21.417 [2024-11-25 20:51:29.535277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.417 [2024-11-25 20:51:29.535287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.417 [2024-11-25 20:51:29.535305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.417 [2024-11-25 20:51:29.535316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:21.417 [2024-11-25 20:51:29.535332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.417 [2024-11-25 20:51:29.535356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.669382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.676 [2024-11-25 20:51:29.669597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:21.676 [2024-11-25 20:51:29.669627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.676 [2024-11-25 20:51:29.669640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.775100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.676 [2024-11-25 20:51:29.775299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:21.676 [2024-11-25 20:51:29.775364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.676 [2024-11-25 20:51:29.775376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.775504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.676 [2024-11-25 20:51:29.775518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:21.676 [2024-11-25 20:51:29.775530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.676 [2024-11-25 20:51:29.775540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.775592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.676 [2024-11-25 
20:51:29.775604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:21.676 [2024-11-25 20:51:29.775616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.676 [2024-11-25 20:51:29.775630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.775761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.676 [2024-11-25 20:51:29.775776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:21.676 [2024-11-25 20:51:29.775788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.676 [2024-11-25 20:51:29.775798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.775841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.676 [2024-11-25 20:51:29.775854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:21.676 [2024-11-25 20:51:29.775865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.676 [2024-11-25 20:51:29.775875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.775925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.676 [2024-11-25 20:51:29.775937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:21.676 [2024-11-25 20:51:29.775947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.676 [2024-11-25 20:51:29.775958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.776007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.676 [2024-11-25 20:51:29.776019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:21.676 [2024-11-25 20:51:29.776030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.676 [2024-11-25 20:51:29.776040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.676 [2024-11-25 20:51:29.776186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.432 ms, result 0 00:33:23.054 00:33:23.054 00:33:23.054 20:51:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:33:24.958 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:33:24.958 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:33:24.958 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:33:24.958 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81513 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@954 -- # '[' -z 81513 ']' 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81513 00:33:24.959 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81513) - No such process 00:33:24.959 Process with pid 81513 is not found 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81513 is not found' 00:33:24.959 20:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:33:25.217 Remove shared memory files 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:25.217 00:33:25.217 real 3m40.826s 00:33:25.217 user 4m12.670s 00:33:25.217 sys 0m42.773s 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.217 20:51:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:25.217 ************************************ 00:33:25.217 END TEST ftl_dirty_shutdown 00:33:25.217 ************************************ 00:33:25.217 20:51:33 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:33:25.217 20:51:33 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:25.217 20:51:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.217 20:51:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:25.217 ************************************ 00:33:25.217 START TEST ftl_upgrade_shutdown 00:33:25.217 ************************************ 00:33:25.217 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:33:25.476 * Looking for test storage... 
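
The xtrace that follows walks the lcov version gate in scripts/common.sh: lt 1.15 2 calls cmp_versions, which splits both version strings on dots and compares them numerically, component by component, so the extra coverage options are only enabled on lcov 2 or newer. A minimal standalone sketch of the same dotted-version comparison (illustrative only; this C cmp_versions is a hypothetical reimplementation, not the shell function):

#include <stdio.h>
#include <stdlib.h>

/* Compare dotted version strings component by component; a missing
 * component counts as 0, so "2" and "2.0" compare equal. */
static int cmp_versions(const char *a, const char *b)
{
    while (*a || *b) {
        long x = strtol(a, (char **)&a, 10);
        long y = strtol(b, (char **)&b, 10);
        if (x != y)
            return x < y ? -1 : 1;
        if (*a) a++;    /* step over the '.' */
        if (*b) b++;
    }
    return 0;
}

int main(void)
{
    /* lcov 1.15 vs required 2: prints -1, i.e. 1.15 < 2 */
    printf("%d\n", cmp_versions("1.15", "2"));
    return 0;
}
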
00:33:25.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.477 --rc genhtml_branch_coverage=1 00:33:25.477 --rc genhtml_function_coverage=1 00:33:25.477 --rc genhtml_legend=1 00:33:25.477 --rc geninfo_all_blocks=1 00:33:25.477 --rc geninfo_unexecuted_blocks=1 00:33:25.477 00:33:25.477 ' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.477 --rc genhtml_branch_coverage=1 00:33:25.477 --rc genhtml_function_coverage=1 00:33:25.477 --rc genhtml_legend=1 00:33:25.477 --rc geninfo_all_blocks=1 00:33:25.477 --rc geninfo_unexecuted_blocks=1 00:33:25.477 00:33:25.477 ' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.477 --rc genhtml_branch_coverage=1 00:33:25.477 --rc genhtml_function_coverage=1 00:33:25.477 --rc genhtml_legend=1 00:33:25.477 --rc geninfo_all_blocks=1 00:33:25.477 --rc geninfo_unexecuted_blocks=1 00:33:25.477 00:33:25.477 ' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.477 --rc genhtml_branch_coverage=1 00:33:25.477 --rc genhtml_function_coverage=1 00:33:25.477 --rc genhtml_legend=1 00:33:25.477 --rc geninfo_all_blocks=1 00:33:25.477 --rc geninfo_unexecuted_blocks=1 00:33:25.477 00:33:25.477 ' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:33:25.477 20:51:33 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83856 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83856 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83856 ']' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.477 20:51:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:25.737 [2024-11-25 20:51:33.652233] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
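
waitforlisten above simply polls until the freshly launched spdk_tgt (pid 83856) is accepting connections on its UNIX-domain RPC socket, /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A rough sketch of that polling idea (an illustration of the concept, not the bash helper; wait_for_listen is a made-up name and the 100 ms retry interval is an assumption):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Retry connect() on a UNIX-domain socket until it succeeds or we
 * run out of attempts; success means the target is up and listening. */
static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;
        }
        close(fd);
        usleep(100 * 1000);     /* assumed 100 ms between attempts */
    }
    return -1;
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) ? 1 : 0;
}
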
00:33:25.737 [2024-11-25 20:51:33.652399] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83856 ] 00:33:25.737 [2024-11-25 20:51:33.837190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.996 [2024-11-25 20:51:33.965913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:33:26.934 20:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:33:27.194 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:33:27.194 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:33:27.194 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:33:27.194 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:33:27.194 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:27.194 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:27.194 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:33:27.194 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:27.453 { 00:33:27.453 "name": "basen1", 00:33:27.453 "aliases": [ 00:33:27.453 "ffd7d2b0-bd77-413e-a3be-2bdf727e78ad" 00:33:27.453 ], 00:33:27.453 "product_name": "NVMe disk", 00:33:27.453 "block_size": 4096, 00:33:27.453 "num_blocks": 1310720, 00:33:27.453 "uuid": "ffd7d2b0-bd77-413e-a3be-2bdf727e78ad", 00:33:27.453 "numa_id": -1, 00:33:27.453 "assigned_rate_limits": { 00:33:27.453 "rw_ios_per_sec": 0, 00:33:27.453 "rw_mbytes_per_sec": 0, 00:33:27.453 "r_mbytes_per_sec": 0, 00:33:27.453 "w_mbytes_per_sec": 0 00:33:27.453 }, 00:33:27.453 "claimed": true, 00:33:27.453 "claim_type": "read_many_write_one", 00:33:27.453 "zoned": false, 00:33:27.453 "supported_io_types": { 00:33:27.453 "read": true, 00:33:27.453 "write": true, 00:33:27.453 "unmap": true, 00:33:27.453 "flush": true, 00:33:27.453 "reset": true, 00:33:27.453 "nvme_admin": true, 00:33:27.453 "nvme_io": true, 00:33:27.453 "nvme_io_md": false, 00:33:27.453 "write_zeroes": true, 00:33:27.453 "zcopy": false, 00:33:27.453 "get_zone_info": false, 00:33:27.453 "zone_management": false, 00:33:27.453 "zone_append": false, 00:33:27.453 "compare": true, 00:33:27.453 "compare_and_write": false, 00:33:27.453 "abort": true, 00:33:27.453 "seek_hole": false, 00:33:27.453 "seek_data": false, 00:33:27.453 "copy": true, 00:33:27.453 "nvme_iov_md": false 00:33:27.453 }, 00:33:27.453 "driver_specific": { 00:33:27.453 "nvme": [ 00:33:27.453 { 00:33:27.453 "pci_address": "0000:00:11.0", 00:33:27.453 "trid": { 00:33:27.453 "trtype": "PCIe", 00:33:27.453 "traddr": "0000:00:11.0" 00:33:27.453 }, 00:33:27.453 "ctrlr_data": { 00:33:27.453 "cntlid": 0, 00:33:27.453 "vendor_id": "0x1b36", 00:33:27.453 "model_number": "QEMU NVMe Ctrl", 00:33:27.453 "serial_number": "12341", 00:33:27.453 "firmware_revision": "8.0.0", 00:33:27.453 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:27.453 "oacs": { 00:33:27.453 "security": 0, 00:33:27.453 "format": 1, 00:33:27.453 "firmware": 0, 00:33:27.453 "ns_manage": 1 00:33:27.453 }, 00:33:27.453 "multi_ctrlr": false, 00:33:27.453 "ana_reporting": false 00:33:27.453 }, 00:33:27.453 "vs": { 00:33:27.453 "nvme_version": "1.4" 00:33:27.453 }, 00:33:27.453 "ns_data": { 00:33:27.453 "id": 1, 00:33:27.453 "can_share": false 00:33:27.453 } 00:33:27.453 } 00:33:27.453 ], 00:33:27.453 "mp_policy": "active_passive" 00:33:27.453 } 00:33:27.453 } 00:33:27.453 ]' 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:27.453 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:27.712 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=81ff69c0-3707-4316-a4bf-e83025509f3e 00:33:27.712 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:33:27.712 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 81ff69c0-3707-4316-a4bf-e83025509f3e 00:33:27.972 20:51:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:33:28.231 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=207b76b3-e039-4e89-ae46-3e10378bdc7b 00:33:28.231 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 207b76b3-e039-4e89-ae46-3e10378bdc7b 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ff05b9d0-36ac-46bf-9083-21937b619d6d 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ff05b9d0-36ac-46bf-9083-21937b619d6d ]] 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ff05b9d0-36ac-46bf-9083-21937b619d6d 5120 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ff05b9d0-36ac-46bf-9083-21937b619d6d 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ff05b9d0-36ac-46bf-9083-21937b619d6d 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ff05b9d0-36ac-46bf-9083-21937b619d6d 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff05b9d0-36ac-46bf-9083-21937b619d6d 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:28.490 { 00:33:28.490 "name": "ff05b9d0-36ac-46bf-9083-21937b619d6d", 00:33:28.490 "aliases": [ 00:33:28.490 "lvs/basen1p0" 00:33:28.490 ], 00:33:28.490 "product_name": "Logical Volume", 00:33:28.490 "block_size": 4096, 00:33:28.490 "num_blocks": 5242880, 00:33:28.490 "uuid": "ff05b9d0-36ac-46bf-9083-21937b619d6d", 00:33:28.490 "assigned_rate_limits": { 00:33:28.490 "rw_ios_per_sec": 0, 00:33:28.490 "rw_mbytes_per_sec": 0, 00:33:28.490 "r_mbytes_per_sec": 0, 00:33:28.490 "w_mbytes_per_sec": 0 00:33:28.490 }, 00:33:28.490 "claimed": false, 00:33:28.490 "zoned": false, 00:33:28.490 "supported_io_types": { 00:33:28.490 "read": true, 00:33:28.490 "write": true, 00:33:28.490 "unmap": true, 00:33:28.490 "flush": false, 00:33:28.490 "reset": true, 00:33:28.490 "nvme_admin": false, 00:33:28.490 "nvme_io": false, 00:33:28.490 "nvme_io_md": false, 00:33:28.490 "write_zeroes": 
true, 00:33:28.490 "zcopy": false, 00:33:28.490 "get_zone_info": false, 00:33:28.490 "zone_management": false, 00:33:28.490 "zone_append": false, 00:33:28.490 "compare": false, 00:33:28.490 "compare_and_write": false, 00:33:28.490 "abort": false, 00:33:28.490 "seek_hole": true, 00:33:28.490 "seek_data": true, 00:33:28.490 "copy": false, 00:33:28.490 "nvme_iov_md": false 00:33:28.490 }, 00:33:28.490 "driver_specific": { 00:33:28.490 "lvol": { 00:33:28.490 "lvol_store_uuid": "207b76b3-e039-4e89-ae46-3e10378bdc7b", 00:33:28.490 "base_bdev": "basen1", 00:33:28.490 "thin_provision": true, 00:33:28.490 "num_allocated_clusters": 0, 00:33:28.490 "snapshot": false, 00:33:28.490 "clone": false, 00:33:28.490 "esnap_clone": false 00:33:28.490 } 00:33:28.490 } 00:33:28.490 } 00:33:28.490 ]' 00:33:28.490 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:28.749 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:28.749 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:28.749 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:33:28.749 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:33:28.749 20:51:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:33:28.749 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:33:28.749 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:33:28.749 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:33:29.008 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:33:29.008 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:33:29.008 20:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:33:29.008 20:51:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:33:29.008 20:51:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:33:29.008 20:51:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ff05b9d0-36ac-46bf-9083-21937b619d6d -c cachen1p0 --l2p_dram_limit 2 00:33:29.267 [2024-11-25 20:51:37.324432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.267 [2024-11-25 20:51:37.324490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:29.267 [2024-11-25 20:51:37.324510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:29.267 [2024-11-25 20:51:37.324522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.267 [2024-11-25 20:51:37.324604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.267 [2024-11-25 20:51:37.324617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:29.267 [2024-11-25 20:51:37.324630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:29.267 [2024-11-25 20:51:37.324641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.267 [2024-11-25 20:51:37.324666] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:29.267 [2024-11-25 
20:51:37.325787] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:29.267 [2024-11-25 20:51:37.325830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.267 [2024-11-25 20:51:37.325842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:29.267 [2024-11-25 20:51:37.325857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.166 ms 00:33:29.267 [2024-11-25 20:51:37.325867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.267 [2024-11-25 20:51:37.325961] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 3f02bbe4-bccb-4ea0-8852-d5a1565be431 00:33:29.267 [2024-11-25 20:51:37.328389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.267 [2024-11-25 20:51:37.328430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:33:29.267 [2024-11-25 20:51:37.328443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:33:29.267 [2024-11-25 20:51:37.328457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.267 [2024-11-25 20:51:37.342022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.267 [2024-11-25 20:51:37.342065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:29.267 [2024-11-25 20:51:37.342079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.513 ms 00:33:29.267 [2024-11-25 20:51:37.342093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.267 [2024-11-25 20:51:37.342151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.267 [2024-11-25 20:51:37.342169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:29.268 [2024-11-25 20:51:37.342180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:33:29.268 [2024-11-25 20:51:37.342198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.268 [2024-11-25 20:51:37.342265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.268 [2024-11-25 20:51:37.342281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:29.268 [2024-11-25 20:51:37.342295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:33:29.268 [2024-11-25 20:51:37.342312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.268 [2024-11-25 20:51:37.342361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:29.268 [2024-11-25 20:51:37.348663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.268 [2024-11-25 20:51:37.348712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:29.268 [2024-11-25 20:51:37.348746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.319 ms 00:33:29.268 [2024-11-25 20:51:37.348757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.268 [2024-11-25 20:51:37.348793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.268 [2024-11-25 20:51:37.348805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:29.268 [2024-11-25 20:51:37.348819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:29.268 [2024-11-25 20:51:37.348829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:29.268 [2024-11-25 20:51:37.348869] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:33:29.268 [2024-11-25 20:51:37.349004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:29.268 [2024-11-25 20:51:37.349027] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:29.268 [2024-11-25 20:51:37.349046] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:29.268 [2024-11-25 20:51:37.349071] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:29.268 [2024-11-25 20:51:37.349100] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:29.268 [2024-11-25 20:51:37.349114] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:29.268 [2024-11-25 20:51:37.349125] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:29.268 [2024-11-25 20:51:37.349142] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:29.268 [2024-11-25 20:51:37.349152] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:29.268 [2024-11-25 20:51:37.349166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.268 [2024-11-25 20:51:37.349177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:29.268 [2024-11-25 20:51:37.349199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:33:29.268 [2024-11-25 20:51:37.349210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.268 [2024-11-25 20:51:37.349300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.268 [2024-11-25 20:51:37.349361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:29.268 [2024-11-25 20:51:37.349376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:29.268 [2024-11-25 20:51:37.349386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.268 [2024-11-25 20:51:37.349505] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:29.268 [2024-11-25 20:51:37.349521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:29.268 [2024-11-25 20:51:37.349536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:29.268 [2024-11-25 20:51:37.349547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:29.268 [2024-11-25 20:51:37.349570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:29.268 [2024-11-25 20:51:37.349593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:29.268 [2024-11-25 20:51:37.349605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:29.268 [2024-11-25 20:51:37.349615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:29.268 [2024-11-25 20:51:37.349638] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:33:29.268 [2024-11-25 20:51:37.349650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:29.268 [2024-11-25 20:51:37.349672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:29.268 [2024-11-25 20:51:37.349681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:29.268 [2024-11-25 20:51:37.349707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:29.268 [2024-11-25 20:51:37.349721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:29.268 [2024-11-25 20:51:37.349743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:29.268 [2024-11-25 20:51:37.349752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:29.268 [2024-11-25 20:51:37.349764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:29.268 [2024-11-25 20:51:37.349773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:29.268 [2024-11-25 20:51:37.349785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:29.268 [2024-11-25 20:51:37.349794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:29.268 [2024-11-25 20:51:37.349807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:29.268 [2024-11-25 20:51:37.349826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:29.268 [2024-11-25 20:51:37.349838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:29.268 [2024-11-25 20:51:37.349848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:29.268 [2024-11-25 20:51:37.349860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:29.268 [2024-11-25 20:51:37.349869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:29.268 [2024-11-25 20:51:37.349885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:29.268 [2024-11-25 20:51:37.349895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:29.268 [2024-11-25 20:51:37.349917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:29.268 [2024-11-25 20:51:37.349929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:29.268 [2024-11-25 20:51:37.349951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.349973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:29.268 [2024-11-25 20:51:37.349983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:29.268 [2024-11-25 20:51:37.349996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.350005] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:33:29.268 [2024-11-25 20:51:37.350018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:29.268 [2024-11-25 20:51:37.350029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:29.268 [2024-11-25 20:51:37.350044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.268 [2024-11-25 20:51:37.350054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:29.268 [2024-11-25 20:51:37.350071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:29.268 [2024-11-25 20:51:37.350080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:29.268 [2024-11-25 20:51:37.350093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:29.268 [2024-11-25 20:51:37.350102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:29.268 [2024-11-25 20:51:37.350114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:29.268 [2024-11-25 20:51:37.350130] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:29.268 [2024-11-25 20:51:37.350151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:29.268 [2024-11-25 20:51:37.350164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:29.268 [2024-11-25 20:51:37.350178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:29.268 [2024-11-25 20:51:37.350188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:29.268 [2024-11-25 20:51:37.350202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:29.268 [2024-11-25 20:51:37.350213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:29.268 [2024-11-25 20:51:37.350227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:29.268 [2024-11-25 20:51:37.350238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:29.268 [2024-11-25 20:51:37.350251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:29.268 [2024-11-25 20:51:37.350261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:29.268 [2024-11-25 20:51:37.350278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:29.268 [2024-11-25 20:51:37.350290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:29.268 [2024-11-25 20:51:37.350303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:29.268 [2024-11-25 20:51:37.350314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:29.269 [2024-11-25 20:51:37.350339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:29.269 [2024-11-25 20:51:37.350350] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:29.269 [2024-11-25 20:51:37.350365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:29.269 [2024-11-25 20:51:37.350378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:29.269 [2024-11-25 20:51:37.350392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:29.269 [2024-11-25 20:51:37.350406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:29.269 [2024-11-25 20:51:37.350420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:29.269 [2024-11-25 20:51:37.350432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.269 [2024-11-25 20:51:37.350447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:29.269 [2024-11-25 20:51:37.350458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.992 ms 00:33:29.269 [2024-11-25 20:51:37.350471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.269 [2024-11-25 20:51:37.350522] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:33:29.269 [2024-11-25 20:51:37.350543] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:33.459 [2024-11-25 20:51:40.814359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.459 [2024-11-25 20:51:40.814441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:33.459 [2024-11-25 20:51:40.814461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3469.455 ms 00:33:33.459 [2024-11-25 20:51:40.814475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.459 [2024-11-25 20:51:40.860056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.459 [2024-11-25 20:51:40.860117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:33.459 [2024-11-25 20:51:40.860134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.329 ms 00:33:33.459 [2024-11-25 20:51:40.860148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.459 [2024-11-25 20:51:40.860258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.459 [2024-11-25 20:51:40.860277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:33.459 [2024-11-25 20:51:40.860289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:33.459 [2024-11-25 20:51:40.860310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.459 [2024-11-25 20:51:40.906719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.459 [2024-11-25 20:51:40.906773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:33.459 [2024-11-25 20:51:40.906790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.432 ms 00:33:33.459 [2024-11-25 20:51:40.906804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.459 [2024-11-25 20:51:40.906853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.459 [2024-11-25 20:51:40.906867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:33.459 [2024-11-25 20:51:40.906878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:33.459 [2024-11-25 20:51:40.906891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.459 [2024-11-25 20:51:40.907704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.459 [2024-11-25 20:51:40.907732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:33.460 [2024-11-25 20:51:40.907756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.759 ms 00:33:33.460 [2024-11-25 20:51:40.907771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:40.907817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:40.907831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:33.460 [2024-11-25 20:51:40.907846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:33.460 [2024-11-25 20:51:40.907862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:40.931463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:40.931508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:33.460 [2024-11-25 20:51:40.931539] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.616 ms 00:33:33.460 [2024-11-25 20:51:40.931553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:40.946147] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:33.460 [2024-11-25 20:51:40.947897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:40.947925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:33.460 [2024-11-25 20:51:40.947941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.254 ms 00:33:33.460 [2024-11-25 20:51:40.947952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:40.993679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:40.993742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:33:33.460 [2024-11-25 20:51:40.993763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.762 ms 00:33:33.460 [2024-11-25 20:51:40.993775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:40.993902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:40.993919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:33.460 [2024-11-25 20:51:40.993939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:33:33.460 [2024-11-25 20:51:40.993950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.029163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:41.029199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:33:33.460 [2024-11-25 20:51:41.029218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.212 ms 00:33:33.460 [2024-11-25 20:51:41.029229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.065407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:41.065440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:33:33.460 [2024-11-25 20:51:41.065457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.184 ms 00:33:33.460 [2024-11-25 20:51:41.065467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.066187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:41.066204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:33.460 [2024-11-25 20:51:41.066218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.677 ms 00:33:33.460 [2024-11-25 20:51:41.066233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.166928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:41.166987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:33:33.460 [2024-11-25 20:51:41.167029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.796 ms 00:33:33.460 [2024-11-25 20:51:41.167041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.204543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:33:33.460 [2024-11-25 20:51:41.204583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:33:33.460 [2024-11-25 20:51:41.204601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.471 ms 00:33:33.460 [2024-11-25 20:51:41.204611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.239804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:41.239838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:33:33.460 [2024-11-25 20:51:41.239855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.202 ms 00:33:33.460 [2024-11-25 20:51:41.239865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.274726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:41.274760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:33.460 [2024-11-25 20:51:41.274778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.872 ms 00:33:33.460 [2024-11-25 20:51:41.274788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.274837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:41.274848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:33.460 [2024-11-25 20:51:41.274866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:33.460 [2024-11-25 20:51:41.274876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.274988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.460 [2024-11-25 20:51:41.275005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:33.460 [2024-11-25 20:51:41.275018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:33:33.460 [2024-11-25 20:51:41.275028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.460 [2024-11-25 20:51:41.276478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3957.943 ms, result 0 00:33:33.460 { 00:33:33.460 "name": "ftl", 00:33:33.460 "uuid": "3f02bbe4-bccb-4ea0-8852-d5a1565be431" 00:33:33.460 } 00:33:33.460 20:51:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:33:33.460 [2024-11-25 20:51:41.494939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.460 20:51:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:33:33.719 20:51:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:33:33.978 [2024-11-25 20:51:41.902706] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:33.978 20:51:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:33:33.978 [2024-11-25 20:51:42.101362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:34.238 20:51:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:34.498 Fill FTL, iteration 1 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83985 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83985 /var/tmp/spdk.tgt.sock 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83985 ']' 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.498 20:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:34.498 [2024-11-25 20:51:42.560432] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:33:34.498 [2024-11-25 20:51:42.560583] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83985 ] 00:33:34.757 [2024-11-25 20:51:42.746199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.757 [2024-11-25 20:51:42.878968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.136 20:51:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.136 20:51:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:36.136 20:51:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:33:36.136 ftln1 00:33:36.136 20:51:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:33:36.136 20:51:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83985 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83985 ']' 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83985 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83985 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:36.395 killing process with pid 83985 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83985' 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83985 00:33:36.395 20:51:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83985 00:33:38.928 20:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:33:38.929 20:51:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:39.187 [2024-11-25 20:51:47.143856] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:33:39.187 [2024-11-25 20:51:47.143991] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84038 ] 00:33:39.446 [2024-11-25 20:51:47.330838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.446 [2024-11-25 20:51:47.472451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.348  [2024-11-25T20:51:50.053Z] Copying: 246/1024 [MB] (246 MBps) [2024-11-25T20:51:50.991Z] Copying: 496/1024 [MB] (250 MBps) [2024-11-25T20:51:52.370Z] Copying: 746/1024 [MB] (250 MBps) [2024-11-25T20:51:52.370Z] Copying: 994/1024 [MB] (248 MBps) [2024-11-25T20:51:53.760Z] Copying: 1024/1024 [MB] (average 248 MBps) 00:33:45.624 00:33:45.624 20:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:33:45.624 Calculate MD5 checksum, iteration 1 00:33:45.624 20:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:33:45.624 20:51:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:45.624 20:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:45.624 20:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:45.624 20:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:45.625 20:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:45.625 20:51:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:45.625 [2024-11-25 20:51:53.437438] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:33:45.625 [2024-11-25 20:51:53.437565] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84102 ] 00:33:45.625 [2024-11-25 20:51:53.621081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.883 [2024-11-25 20:51:53.764158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.260  [2024-11-25T20:51:55.964Z] Copying: 702/1024 [MB] (702 MBps) [2024-11-25T20:51:56.901Z] Copying: 1024/1024 [MB] (average 695 MBps) 00:33:48.765 00:33:48.765 20:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:33:48.765 20:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:50.677 Fill FTL, iteration 2 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=88ca0ddf445409082ced5cc12cc82fac 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:50.677 20:51:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:50.677 [2024-11-25 20:51:58.572597] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:33:50.677 [2024-11-25 20:51:58.572748] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84159 ] 00:33:50.677 [2024-11-25 20:51:58.754552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.936 [2024-11-25 20:51:58.894757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.343  [2024-11-25T20:52:01.413Z] Copying: 242/1024 [MB] (242 MBps) [2024-11-25T20:52:02.485Z] Copying: 487/1024 [MB] (245 MBps) [2024-11-25T20:52:03.422Z] Copying: 732/1024 [MB] (245 MBps) [2024-11-25T20:52:03.680Z] Copying: 977/1024 [MB] (245 MBps) [2024-11-25T20:52:05.057Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:33:56.921 00:33:56.921 20:52:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:33:56.921 20:52:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:33:56.921 Calculate MD5 checksum, iteration 2 00:33:56.921 20:52:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:56.921 20:52:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:56.921 20:52:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:56.921 20:52:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:56.921 20:52:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:56.921 20:52:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:56.921 [2024-11-25 20:52:04.918869] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:33:56.922 [2024-11-25 20:52:04.919204] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84229 ] 00:33:57.181 [2024-11-25 20:52:05.105912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.181 [2024-11-25 20:52:05.247738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.089  [2024-11-25T20:52:07.484Z] Copying: 697/1024 [MB] (697 MBps) [2024-11-25T20:52:09.408Z] Copying: 1024/1024 [MB] (average 703 MBps) 00:34:01.272 00:34:01.272 20:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:34:01.272 20:52:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:02.650 20:52:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:34:02.650 20:52:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a964cd5e771cccbd97a3f9b51982922b 00:34:02.650 20:52:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:34:02.650 20:52:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:34:02.650 20:52:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:34:02.909 [2024-11-25 20:52:10.918127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.909 [2024-11-25 20:52:10.918189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:02.909 [2024-11-25 20:52:10.918207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:34:02.909 [2024-11-25 20:52:10.918218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.909 [2024-11-25 20:52:10.918273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.909 [2024-11-25 20:52:10.918285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:02.909 [2024-11-25 20:52:10.918302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:02.909 [2024-11-25 20:52:10.918314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.909 [2024-11-25 20:52:10.918337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.909 [2024-11-25 20:52:10.918359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:02.909 [2024-11-25 20:52:10.918376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:34:02.909 [2024-11-25 20:52:10.918387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.909 [2024-11-25 20:52:10.918458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.339 ms, result 0 00:34:02.909 true 00:34:02.909 20:52:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:03.169 { 00:34:03.169 "name": "ftl", 00:34:03.169 "properties": [ 00:34:03.169 { 00:34:03.169 "name": "superblock_version", 00:34:03.169 "value": 5, 00:34:03.169 "read-only": true 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "name": "base_device", 00:34:03.169 "bands": [ 00:34:03.169 { 00:34:03.169 "id": 0, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 
00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 1, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 2, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 3, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 4, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 5, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 6, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 7, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 8, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 9, 00:34:03.169 "state": "FREE", 00:34:03.169 "validity": 0.0 00:34:03.169 }, 00:34:03.169 { 00:34:03.169 "id": 10, 00:34:03.169 "state": "FREE", 00:34:03.170 "validity": 0.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 11, 00:34:03.170 "state": "FREE", 00:34:03.170 "validity": 0.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 12, 00:34:03.170 "state": "FREE", 00:34:03.170 "validity": 0.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 13, 00:34:03.170 "state": "FREE", 00:34:03.170 "validity": 0.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 14, 00:34:03.170 "state": "FREE", 00:34:03.170 "validity": 0.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 15, 00:34:03.170 "state": "FREE", 00:34:03.170 "validity": 0.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 16, 00:34:03.170 "state": "FREE", 00:34:03.170 "validity": 0.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 17, 00:34:03.170 "state": "FREE", 00:34:03.170 "validity": 0.0 00:34:03.170 } 00:34:03.170 ], 00:34:03.170 "read-only": true 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "name": "cache_device", 00:34:03.170 "type": "bdev", 00:34:03.170 "chunks": [ 00:34:03.170 { 00:34:03.170 "id": 0, 00:34:03.170 "state": "INACTIVE", 00:34:03.170 "utilization": 0.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 1, 00:34:03.170 "state": "CLOSED", 00:34:03.170 "utilization": 1.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 2, 00:34:03.170 "state": "CLOSED", 00:34:03.170 "utilization": 1.0 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 3, 00:34:03.170 "state": "OPEN", 00:34:03.170 "utilization": 0.001953125 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "id": 4, 00:34:03.170 "state": "OPEN", 00:34:03.170 "utilization": 0.0 00:34:03.170 } 00:34:03.170 ], 00:34:03.170 "read-only": true 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "name": "verbose_mode", 00:34:03.170 "value": true, 00:34:03.170 "unit": "", 00:34:03.170 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:34:03.170 }, 00:34:03.170 { 00:34:03.170 "name": "prep_upgrade_on_shutdown", 00:34:03.170 "value": false, 00:34:03.170 "unit": "", 00:34:03.170 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:34:03.170 } 00:34:03.170 ] 00:34:03.170 } 00:34:03.170 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:34:03.429 [2024-11-25 20:52:11.338105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:34:03.429 [2024-11-25 20:52:11.338161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:03.429 [2024-11-25 20:52:11.338179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:03.429 [2024-11-25 20:52:11.338190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.429 [2024-11-25 20:52:11.338241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.429 [2024-11-25 20:52:11.338253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:03.429 [2024-11-25 20:52:11.338265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:03.429 [2024-11-25 20:52:11.338275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.429 [2024-11-25 20:52:11.338297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.429 [2024-11-25 20:52:11.338308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:03.429 [2024-11-25 20:52:11.338319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:34:03.429 [2024-11-25 20:52:11.338341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.429 [2024-11-25 20:52:11.338410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.295 ms, result 0 00:34:03.429 true 00:34:03.429 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:34:03.429 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:03.429 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:34:03.688 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:34:03.688 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:34:03.688 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:34:03.688 [2024-11-25 20:52:11.758033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.688 [2024-11-25 20:52:11.758107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:03.688 [2024-11-25 20:52:11.758124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:03.688 [2024-11-25 20:52:11.758136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.688 [2024-11-25 20:52:11.758161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.688 [2024-11-25 20:52:11.758172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:03.688 [2024-11-25 20:52:11.758183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:03.688 [2024-11-25 20:52:11.758192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.688 [2024-11-25 20:52:11.758213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.688 [2024-11-25 20:52:11.758224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:03.688 [2024-11-25 20:52:11.758235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:34:03.688 [2024-11-25 20:52:11.758244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:34:03.688 [2024-11-25 20:52:11.758310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.265 ms, result 0 00:34:03.688 true 00:34:03.689 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:03.948 { 00:34:03.948 "name": "ftl", 00:34:03.948 "properties": [ 00:34:03.948 { 00:34:03.948 "name": "superblock_version", 00:34:03.948 "value": 5, 00:34:03.948 "read-only": true 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "name": "base_device", 00:34:03.948 "bands": [ 00:34:03.948 { 00:34:03.948 "id": 0, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 1, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 2, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 3, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 4, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 5, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 6, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 7, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 8, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 9, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 10, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 11, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 12, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 13, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 14, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 15, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 16, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 17, 00:34:03.948 "state": "FREE", 00:34:03.948 "validity": 0.0 00:34:03.948 } 00:34:03.948 ], 00:34:03.948 "read-only": true 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "name": "cache_device", 00:34:03.948 "type": "bdev", 00:34:03.948 "chunks": [ 00:34:03.948 { 00:34:03.948 "id": 0, 00:34:03.948 "state": "INACTIVE", 00:34:03.948 "utilization": 0.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 1, 00:34:03.948 "state": "CLOSED", 00:34:03.948 "utilization": 1.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 2, 00:34:03.948 "state": "CLOSED", 00:34:03.948 "utilization": 1.0 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 3, 00:34:03.948 "state": "OPEN", 00:34:03.948 "utilization": 0.001953125 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "id": 4, 00:34:03.948 "state": "OPEN", 00:34:03.948 "utilization": 0.0 00:34:03.948 } 00:34:03.948 ], 00:34:03.948 "read-only": true 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "name": "verbose_mode", 
00:34:03.948 "value": true, 00:34:03.948 "unit": "", 00:34:03.948 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:34:03.948 }, 00:34:03.948 { 00:34:03.948 "name": "prep_upgrade_on_shutdown", 00:34:03.948 "value": true, 00:34:03.948 "unit": "", 00:34:03.948 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:34:03.948 } 00:34:03.948 ] 00:34:03.948 } 00:34:03.948 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:34:03.948 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83856 ]] 00:34:03.948 20:52:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83856 00:34:03.948 20:52:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83856 ']' 00:34:03.948 20:52:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83856 00:34:03.948 20:52:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:03.948 20:52:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.948 20:52:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83856 00:34:03.948 20:52:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.948 killing process with pid 83856 00:34:03.948 20:52:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.948 20:52:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83856' 00:34:03.948 20:52:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83856 00:34:03.948 20:52:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83856 00:34:05.326 [2024-11-25 20:52:13.219456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:05.326 [2024-11-25 20:52:13.239885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.326 [2024-11-25 20:52:13.239926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:05.326 [2024-11-25 20:52:13.239942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:05.326 [2024-11-25 20:52:13.239953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:05.326 [2024-11-25 20:52:13.239993] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:05.326 [2024-11-25 20:52:13.244463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:05.326 [2024-11-25 20:52:13.244493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:05.326 [2024-11-25 20:52:13.244522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.460 ms 00:34:05.326 [2024-11-25 20:52:13.244533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.675345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.675431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:13.451 [2024-11-25 20:52:20.675473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7442.835 ms 00:34:13.451 [2024-11-25 20:52:20.675484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.676510] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.676546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:13.451 [2024-11-25 20:52:20.676561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.006 ms 00:34:13.451 [2024-11-25 20:52:20.676572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.677478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.677501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:13.451 [2024-11-25 20:52:20.677514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.889 ms 00:34:13.451 [2024-11-25 20:52:20.677534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.692724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.692762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:13.451 [2024-11-25 20:52:20.692775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.155 ms 00:34:13.451 [2024-11-25 20:52:20.692785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.702219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.702259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:13.451 [2024-11-25 20:52:20.702280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.410 ms 00:34:13.451 [2024-11-25 20:52:20.702291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.702415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.702431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:13.451 [2024-11-25 20:52:20.702449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:34:13.451 [2024-11-25 20:52:20.702459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.716695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.716727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:13.451 [2024-11-25 20:52:20.716740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.241 ms 00:34:13.451 [2024-11-25 20:52:20.716749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.731231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.731264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:13.451 [2024-11-25 20:52:20.731276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.469 ms 00:34:13.451 [2024-11-25 20:52:20.731284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.745758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.745794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:13.451 [2024-11-25 20:52:20.745807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.459 ms 00:34:13.451 [2024-11-25 20:52:20.745817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.760319] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.451 [2024-11-25 20:52:20.760359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:13.451 [2024-11-25 20:52:20.760372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.437 ms 00:34:13.451 [2024-11-25 20:52:20.760381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.451 [2024-11-25 20:52:20.760402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:13.451 [2024-11-25 20:52:20.760432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:13.451 [2024-11-25 20:52:20.760446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:13.451 [2024-11-25 20:52:20.760457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:13.451 [2024-11-25 20:52:20.760468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:13.451 [2024-11-25 20:52:20.760679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:13.451 [2024-11-25 20:52:20.760689] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3f02bbe4-bccb-4ea0-8852-d5a1565be431 00:34:13.451 [2024-11-25 20:52:20.760701] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:13.451 [2024-11-25 20:52:20.760712] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:34:13.451 [2024-11-25 20:52:20.760723] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:34:13.451 [2024-11-25 20:52:20.760734] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:34:13.451 [2024-11-25 20:52:20.760751] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:13.451 [2024-11-25 20:52:20.760765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:13.451 [2024-11-25 20:52:20.760789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:13.452 [2024-11-25 20:52:20.760799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:13.452 [2024-11-25 20:52:20.760808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:13.452 [2024-11-25 20:52:20.760819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.452 [2024-11-25 20:52:20.760830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:13.452 [2024-11-25 20:52:20.760841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:34:13.452 [2024-11-25 20:52:20.760857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:20.781930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.452 [2024-11-25 20:52:20.781965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:13.452 [2024-11-25 20:52:20.782001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.074 ms 00:34:13.452 [2024-11-25 20:52:20.782012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:20.782622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:13.452 [2024-11-25 20:52:20.782644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:34:13.452 [2024-11-25 20:52:20.782656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.587 ms 00:34:13.452 [2024-11-25 20:52:20.782667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:20.850075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:20.850112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:13.452 [2024-11-25 20:52:20.850148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:20.850159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:20.850194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:20.850205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:13.452 [2024-11-25 20:52:20.850216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:20.850226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:20.850322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:20.850337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:13.452 [2024-11-25 20:52:20.850359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:20.850374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:20.850394] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:20.850404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:13.452 [2024-11-25 20:52:20.850414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:20.850425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:20.977565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:20.977627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:13.452 [2024-11-25 20:52:20.977649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:20.977677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:21.078632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:21.078710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:13.452 [2024-11-25 20:52:21.078726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:21.078738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:21.078872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:21.078886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:13.452 [2024-11-25 20:52:21.078898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:21.078909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:21.078970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:21.078982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:13.452 [2024-11-25 20:52:21.078993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:21.079004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:21.079136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:21.079150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:13.452 [2024-11-25 20:52:21.079162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:21.079173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:21.079219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:21.079231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:34:13.452 [2024-11-25 20:52:21.079242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:21.079253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:21.079303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:21.079315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:13.452 [2024-11-25 20:52:21.079327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:21.079337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 
[2024-11-25 20:52:21.079419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:13.452 [2024-11-25 20:52:21.079435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:13.452 [2024-11-25 20:52:21.079447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:13.452 [2024-11-25 20:52:21.079458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:13.452 [2024-11-25 20:52:21.079612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7852.428 ms, result 0 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84444 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84444 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84444 ']' 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.744 20:52:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:17.004 [2024-11-25 20:52:24.937157] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:34:17.004 [2024-11-25 20:52:24.937312] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84444 ] 00:34:17.004 [2024-11-25 20:52:25.123191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.263 [2024-11-25 20:52:25.255347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.641 [2024-11-25 20:52:26.368491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:18.641 [2024-11-25 20:52:26.368575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:18.641 [2024-11-25 20:52:26.516361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.516409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:18.641 [2024-11-25 20:52:26.516435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:18.641 [2024-11-25 20:52:26.516450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.516526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.516547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:18.641 [2024-11-25 20:52:26.516563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:34:18.641 [2024-11-25 20:52:26.516577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.516620] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:18.641 [2024-11-25 20:52:26.517624] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:18.641 [2024-11-25 20:52:26.517668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.517688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:18.641 [2024-11-25 20:52:26.517707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.065 ms 00:34:18.641 [2024-11-25 20:52:26.517723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.520402] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:34:18.641 [2024-11-25 20:52:26.541215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.541259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:34:18.641 [2024-11-25 20:52:26.541298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.847 ms 00:34:18.641 [2024-11-25 20:52:26.541314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.541412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.541436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:34:18.641 [2024-11-25 20:52:26.541455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:34:18.641 [2024-11-25 20:52:26.541470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.554273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 
20:52:26.554306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:18.641 [2024-11-25 20:52:26.554352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.713 ms 00:34:18.641 [2024-11-25 20:52:26.554369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.554463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.554484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:18.641 [2024-11-25 20:52:26.554502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:34:18.641 [2024-11-25 20:52:26.554520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.554604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.554630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:18.641 [2024-11-25 20:52:26.554648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:34:18.641 [2024-11-25 20:52:26.554664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.554723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:18.641 [2024-11-25 20:52:26.560843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.560880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:18.641 [2024-11-25 20:52:26.560907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.155 ms 00:34:18.641 [2024-11-25 20:52:26.560923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.560972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.641 [2024-11-25 20:52:26.560991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:18.641 [2024-11-25 20:52:26.561009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:18.641 [2024-11-25 20:52:26.561025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.641 [2024-11-25 20:52:26.561083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:34:18.642 [2024-11-25 20:52:26.561126] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:34:18.642 [2024-11-25 20:52:26.561180] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:34:18.642 [2024-11-25 20:52:26.561211] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:34:18.642 [2024-11-25 20:52:26.561349] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:18.642 [2024-11-25 20:52:26.561376] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:18.642 [2024-11-25 20:52:26.561399] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:18.642 [2024-11-25 20:52:26.561420] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:18.642 [2024-11-25 20:52:26.561448] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:34:18.642 [2024-11-25 20:52:26.561467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:18.642 [2024-11-25 20:52:26.561483] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:18.642 [2024-11-25 20:52:26.561501] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:18.642 [2024-11-25 20:52:26.561517] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:18.642 [2024-11-25 20:52:26.561535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.642 [2024-11-25 20:52:26.561551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:18.642 [2024-11-25 20:52:26.561569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.457 ms 00:34:18.642 [2024-11-25 20:52:26.561585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.642 [2024-11-25 20:52:26.561690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.642 [2024-11-25 20:52:26.561711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:18.642 [2024-11-25 20:52:26.561734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:34:18.642 [2024-11-25 20:52:26.561751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.642 [2024-11-25 20:52:26.561879] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:18.642 [2024-11-25 20:52:26.561903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:18.642 [2024-11-25 20:52:26.561921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:18.642 [2024-11-25 20:52:26.561940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.561957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:18.642 [2024-11-25 20:52:26.561972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.561990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:18.642 [2024-11-25 20:52:26.562006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:18.642 [2024-11-25 20:52:26.562023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:18.642 [2024-11-25 20:52:26.562037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:18.642 [2024-11-25 20:52:26.562070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:18.642 [2024-11-25 20:52:26.562085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:18.642 [2024-11-25 20:52:26.562116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:34:18.642 [2024-11-25 20:52:26.562132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:18.642 [2024-11-25 20:52:26.562163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:18.642 [2024-11-25 20:52:26.562178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562193] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:18.642 [2024-11-25 20:52:26.562209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:18.642 [2024-11-25 20:52:26.562225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:18.642 [2024-11-25 20:52:26.562240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:18.642 [2024-11-25 20:52:26.562270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:18.642 [2024-11-25 20:52:26.562286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:18.642 [2024-11-25 20:52:26.562302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:18.642 [2024-11-25 20:52:26.562317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:18.642 [2024-11-25 20:52:26.562345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:18.642 [2024-11-25 20:52:26.562363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:18.642 [2024-11-25 20:52:26.562380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:18.642 [2024-11-25 20:52:26.562395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:18.642 [2024-11-25 20:52:26.562415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:18.642 [2024-11-25 20:52:26.562432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:18.642 [2024-11-25 20:52:26.562447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:18.642 [2024-11-25 20:52:26.562477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:18.642 [2024-11-25 20:52:26.562492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:18.642 [2024-11-25 20:52:26.562522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:18.642 [2024-11-25 20:52:26.562567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:18.642 [2024-11-25 20:52:26.562583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562598] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:34:18.642 [2024-11-25 20:52:26.562614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:18.642 [2024-11-25 20:52:26.562631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:18.642 [2024-11-25 20:52:26.562654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:18.642 [2024-11-25 20:52:26.562671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:18.642 [2024-11-25 20:52:26.562688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:18.642 [2024-11-25 20:52:26.562703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:18.642 [2024-11-25 20:52:26.562719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:18.642 [2024-11-25 20:52:26.562735] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:18.642 [2024-11-25 20:52:26.562751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:18.642 [2024-11-25 20:52:26.562769] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:18.642 [2024-11-25 20:52:26.562790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.562810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:18.642 [2024-11-25 20:52:26.562827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.562844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.562862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:18.642 [2024-11-25 20:52:26.562879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:18.642 [2024-11-25 20:52:26.562896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:18.642 [2024-11-25 20:52:26.562913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:18.642 [2024-11-25 20:52:26.562930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.562948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.562965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.562982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.562998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.563016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.563033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:18.642 [2024-11-25 20:52:26.563049] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:34:18.642 [2024-11-25 20:52:26.563068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.563086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:18.642 [2024-11-25 20:52:26.563104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:18.642 [2024-11-25 20:52:26.563122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:18.642 [2024-11-25 20:52:26.563138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:18.642 [2024-11-25 20:52:26.563157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.642 [2024-11-25 20:52:26.563175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:18.642 [2024-11-25 20:52:26.563192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.354 ms 00:34:18.642 [2024-11-25 20:52:26.563209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.643 [2024-11-25 20:52:26.563287] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:34:18.643 [2024-11-25 20:52:26.563316] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:34:22.831 [2024-11-25 20:52:30.077732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.077808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:34:22.831 [2024-11-25 20:52:30.077829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3520.149 ms 00:34:22.831 [2024-11-25 20:52:30.077865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.125251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.125308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:22.831 [2024-11-25 20:52:30.125340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.107 ms 00:34:22.831 [2024-11-25 20:52:30.125352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.125476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.125491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:22.831 [2024-11-25 20:52:30.125503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:34:22.831 [2024-11-25 20:52:30.125514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.176872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.176923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:22.831 [2024-11-25 20:52:30.176944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.361 ms 00:34:22.831 [2024-11-25 20:52:30.176971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.177028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.177040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:22.831 [2024-11-25 20:52:30.177052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:22.831 [2024-11-25 20:52:30.177062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.177918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.177943] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:22.831 [2024-11-25 20:52:30.177956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.791 ms 00:34:22.831 [2024-11-25 20:52:30.177973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.178025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.178037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:22.831 [2024-11-25 20:52:30.178048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:34:22.831 [2024-11-25 20:52:30.178059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.203464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.203508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:22.831 [2024-11-25 20:52:30.203539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.419 ms 00:34:22.831 [2024-11-25 20:52:30.203550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.223982] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:34:22.831 [2024-11-25 20:52:30.224024] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:34:22.831 [2024-11-25 20:52:30.224056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.224068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:34:22.831 [2024-11-25 20:52:30.224080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.398 ms 00:34:22.831 [2024-11-25 20:52:30.224091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.243345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.243383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:34:22.831 [2024-11-25 20:52:30.243398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.238 ms 00:34:22.831 [2024-11-25 20:52:30.243408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.260721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.260754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:34:22.831 [2024-11-25 20:52:30.260768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.277 ms 00:34:22.831 [2024-11-25 20:52:30.260778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.277767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.277799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:34:22.831 [2024-11-25 20:52:30.277812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.962 ms 00:34:22.831 [2024-11-25 20:52:30.277821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.278638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.278673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:22.831 [2024-11-25 
20:52:30.278685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.691 ms 00:34:22.831 [2024-11-25 20:52:30.278697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.831 [2024-11-25 20:52:30.378391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.831 [2024-11-25 20:52:30.378476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:34:22.831 [2024-11-25 20:52:30.378495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.830 ms 00:34:22.831 [2024-11-25 20:52:30.378507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.832 [2024-11-25 20:52:30.388546] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:22.832 [2024-11-25 20:52:30.389406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.832 [2024-11-25 20:52:30.389435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:22.832 [2024-11-25 20:52:30.389448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.858 ms 00:34:22.832 [2024-11-25 20:52:30.389460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.832 [2024-11-25 20:52:30.389559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.832 [2024-11-25 20:52:30.389573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:34:22.832 [2024-11-25 20:52:30.389586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:22.832 [2024-11-25 20:52:30.389597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.832 [2024-11-25 20:52:30.389669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.832 [2024-11-25 20:52:30.389683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:22.832 [2024-11-25 20:52:30.389695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:34:22.832 [2024-11-25 20:52:30.389705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.832 [2024-11-25 20:52:30.389731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.832 [2024-11-25 20:52:30.389747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:22.832 [2024-11-25 20:52:30.389758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:22.832 [2024-11-25 20:52:30.389768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.832 [2024-11-25 20:52:30.389810] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:34:22.832 [2024-11-25 20:52:30.389823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.832 [2024-11-25 20:52:30.389842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:34:22.832 [2024-11-25 20:52:30.389854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:34:22.832 [2024-11-25 20:52:30.389865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.832 [2024-11-25 20:52:30.424780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.832 [2024-11-25 20:52:30.424815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:34:22.832 [2024-11-25 20:52:30.424828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.947 ms 00:34:22.832 [2024-11-25 20:52:30.424840] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.832 [2024-11-25 20:52:30.424942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:22.832 [2024-11-25 20:52:30.424955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:22.832 [2024-11-25 20:52:30.424967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:34:22.832 [2024-11-25 20:52:30.424977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:22.832 [2024-11-25 20:52:30.426576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3916.018 ms, result 0 00:34:22.832 [2024-11-25 20:52:30.441145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.832 [2024-11-25 20:52:30.457118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:22.832 [2024-11-25 20:52:30.466038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:22.832 20:52:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.832 20:52:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:22.832 20:52:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:22.832 20:52:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:34:22.832 20:52:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:34:23.090 [2024-11-25 20:52:31.125384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.090 [2024-11-25 20:52:31.125444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:23.090 [2024-11-25 20:52:31.125468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:34:23.090 [2024-11-25 20:52:31.125480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.090 [2024-11-25 20:52:31.125507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.090 [2024-11-25 20:52:31.125520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:23.090 [2024-11-25 20:52:31.125531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:23.090 [2024-11-25 20:52:31.125542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.090 [2024-11-25 20:52:31.125563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:23.090 [2024-11-25 20:52:31.125575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:23.090 [2024-11-25 20:52:31.125586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:34:23.090 [2024-11-25 20:52:31.125602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:23.090 [2024-11-25 20:52:31.125668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.308 ms, result 0 00:34:23.090 true 00:34:23.090 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:23.349 { 00:34:23.349 "name": "ftl", 00:34:23.349 "properties": [ 00:34:23.349 { 00:34:23.349 "name": "superblock_version", 00:34:23.349 "value": 5, 00:34:23.349 "read-only": true 00:34:23.349 }, 
00:34:23.349 { 00:34:23.349 "name": "base_device", 00:34:23.349 "bands": [ 00:34:23.349 { 00:34:23.349 "id": 0, 00:34:23.349 "state": "CLOSED", 00:34:23.349 "validity": 1.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 1, 00:34:23.349 "state": "CLOSED", 00:34:23.349 "validity": 1.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 2, 00:34:23.349 "state": "CLOSED", 00:34:23.349 "validity": 0.007843137254901933 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 3, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 4, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 5, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 6, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 7, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 8, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 9, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 10, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 11, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 12, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 13, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 14, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 15, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 16, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 17, 00:34:23.349 "state": "FREE", 00:34:23.349 "validity": 0.0 00:34:23.349 } 00:34:23.349 ], 00:34:23.349 "read-only": true 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "name": "cache_device", 00:34:23.349 "type": "bdev", 00:34:23.349 "chunks": [ 00:34:23.349 { 00:34:23.349 "id": 0, 00:34:23.349 "state": "INACTIVE", 00:34:23.349 "utilization": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 1, 00:34:23.349 "state": "OPEN", 00:34:23.349 "utilization": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 2, 00:34:23.349 "state": "OPEN", 00:34:23.349 "utilization": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 3, 00:34:23.349 "state": "FREE", 00:34:23.349 "utilization": 0.0 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "id": 4, 00:34:23.349 "state": "FREE", 00:34:23.349 "utilization": 0.0 00:34:23.349 } 00:34:23.349 ], 00:34:23.349 "read-only": true 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "name": "verbose_mode", 00:34:23.349 "value": true, 00:34:23.349 "unit": "", 00:34:23.349 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:34:23.349 }, 00:34:23.349 { 00:34:23.349 "name": "prep_upgrade_on_shutdown", 00:34:23.349 "value": false, 00:34:23.349 "unit": "", 00:34:23.349 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:34:23.349 } 00:34:23.349 ] 00:34:23.349 } 00:34:23.349 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:34:23.349 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:34:23.349 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:23.608 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:34:23.608 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:34:23.608 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:34:23.608 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:34:23.608 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:34:23.867 Validate MD5 checksum, iteration 1 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:23.867 20:52:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:23.867 [2024-11-25 20:52:31.865016] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:34:23.867 [2024-11-25 20:52:31.865170] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84539 ] 00:34:24.126 [2024-11-25 20:52:32.048983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.126 [2024-11-25 20:52:32.188998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.029  [2024-11-25T20:52:34.424Z] Copying: 706/1024 [MB] (706 MBps) [2024-11-25T20:52:36.329Z] Copying: 1024/1024 [MB] (average 701 MBps) 00:34:28.193 00:34:28.193 20:52:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:28.193 20:52:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=88ca0ddf445409082ced5cc12cc82fac 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 88ca0ddf445409082ced5cc12cc82fac != \8\8\c\a\0\d\d\f\4\4\5\4\0\9\0\8\2\c\e\d\5\c\c\1\2\c\c\8\2\f\a\c ]] 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:30.100 Validate MD5 checksum, iteration 2 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:30.100 20:52:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:30.100 [2024-11-25 20:52:37.858510] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
00:34:30.100 [2024-11-25 20:52:37.858660] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84600 ] 00:34:30.100 [2024-11-25 20:52:38.043496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.100 [2024-11-25 20:52:38.176606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.006  [2024-11-25T20:52:40.401Z] Copying: 716/1024 [MB] (716 MBps) [2024-11-25T20:52:43.694Z] Copying: 1024/1024 [MB] (average 699 MBps) 00:34:35.558 00:34:35.558 20:52:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:35.558 20:52:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a964cd5e771cccbd97a3f9b51982922b 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a964cd5e771cccbd97a3f9b51982922b != \a\9\6\4\c\d\5\e\7\7\1\c\c\c\b\d\9\7\a\3\f\9\b\5\1\9\8\2\9\2\2\b ]] 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84444 ]] 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84444 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84673 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84673 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84673 ']' 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.940 20:52:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:37.200 [2024-11-25 20:52:45.175167] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:34:37.200 [2024-11-25 20:52:45.175295] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84673 ] 00:34:37.200 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84444 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:34:37.460 [2024-11-25 20:52:45.359990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.460 [2024-11-25 20:52:45.491692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.843 [2024-11-25 20:52:46.570336] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:38.843 [2024-11-25 20:52:46.570433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:38.843 [2024-11-25 20:52:46.717862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.843 [2024-11-25 20:52:46.717906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:38.843 [2024-11-25 20:52:46.717939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:34:38.843 [2024-11-25 20:52:46.717949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.843 [2024-11-25 20:52:46.718005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.843 [2024-11-25 20:52:46.718018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:38.843 [2024-11-25 20:52:46.718029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:34:38.843 [2024-11-25 20:52:46.718039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.843 [2024-11-25 20:52:46.718069] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:38.843 [2024-11-25 20:52:46.718999] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:38.843 [2024-11-25 20:52:46.719029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.843 [2024-11-25 20:52:46.719041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:38.843 [2024-11-25 20:52:46.719053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.973 ms 00:34:38.843 [2024-11-25 20:52:46.719063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.843 [2024-11-25 20:52:46.719447] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:34:38.843 [2024-11-25 20:52:46.744671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.843 [2024-11-25 20:52:46.744707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:34:38.843 [2024-11-25 20:52:46.744737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.271 ms 00:34:38.843 [2024-11-25 20:52:46.744749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.843 [2024-11-25 20:52:46.758586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:34:38.843 [2024-11-25 20:52:46.758633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:34:38.843 [2024-11-25 20:52:46.758646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:34:38.843 [2024-11-25 20:52:46.758656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.843 [2024-11-25 20:52:46.759117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.843 [2024-11-25 20:52:46.759130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:38.843 [2024-11-25 20:52:46.759141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.381 ms 00:34:38.843 [2024-11-25 20:52:46.759151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.843 [2024-11-25 20:52:46.759216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.843 [2024-11-25 20:52:46.759229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:38.843 [2024-11-25 20:52:46.759239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:34:38.843 [2024-11-25 20:52:46.759249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.843 [2024-11-25 20:52:46.759274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.843 [2024-11-25 20:52:46.759285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:38.844 [2024-11-25 20:52:46.759295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:38.844 [2024-11-25 20:52:46.759305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.844 [2024-11-25 20:52:46.759344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:38.844 [2024-11-25 20:52:46.763431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.844 [2024-11-25 20:52:46.763457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:38.844 [2024-11-25 20:52:46.763469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.117 ms 00:34:38.844 [2024-11-25 20:52:46.763500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.844 [2024-11-25 20:52:46.763531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.844 [2024-11-25 20:52:46.763542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:38.844 [2024-11-25 20:52:46.763553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:38.844 [2024-11-25 20:52:46.763563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.844 [2024-11-25 20:52:46.763597] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:34:38.844 [2024-11-25 20:52:46.763621] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:34:38.844 [2024-11-25 20:52:46.763656] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:34:38.844 [2024-11-25 20:52:46.763680] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:34:38.844 [2024-11-25 20:52:46.763774] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:38.844 [2024-11-25 20:52:46.763788] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:38.844 [2024-11-25 20:52:46.763801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:38.844 [2024-11-25 20:52:46.763813] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:38.844 [2024-11-25 20:52:46.763825] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:38.844 [2024-11-25 20:52:46.763853] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:38.844 [2024-11-25 20:52:46.763864] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:38.844 [2024-11-25 20:52:46.763874] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:38.844 [2024-11-25 20:52:46.763883] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:38.844 [2024-11-25 20:52:46.763898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.844 [2024-11-25 20:52:46.763908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:38.844 [2024-11-25 20:52:46.763919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:34:38.844 [2024-11-25 20:52:46.763930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.844 [2024-11-25 20:52:46.764003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.844 [2024-11-25 20:52:46.764013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:38.844 [2024-11-25 20:52:46.764024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:34:38.844 [2024-11-25 20:52:46.764034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.844 [2024-11-25 20:52:46.764122] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:38.844 [2024-11-25 20:52:46.764144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:38.844 [2024-11-25 20:52:46.764156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:38.844 [2024-11-25 20:52:46.764167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:38.844 [2024-11-25 20:52:46.764191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:38.844 [2024-11-25 20:52:46.764211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:38.844 [2024-11-25 20:52:46.764221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:38.844 [2024-11-25 20:52:46.764230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:38.844 [2024-11-25 20:52:46.764249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:38.844 [2024-11-25 20:52:46.764258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:38.844 [2024-11-25 20:52:46.764277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:34:38.844 [2024-11-25 20:52:46.764286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:38.844 [2024-11-25 20:52:46.764305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:38.844 [2024-11-25 20:52:46.764315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:38.844 [2024-11-25 20:52:46.764349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:38.844 [2024-11-25 20:52:46.764371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:38.844 [2024-11-25 20:52:46.764381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:38.844 [2024-11-25 20:52:46.764390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:38.844 [2024-11-25 20:52:46.764400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:38.844 [2024-11-25 20:52:46.764409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:38.844 [2024-11-25 20:52:46.764419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:38.844 [2024-11-25 20:52:46.764429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:38.844 [2024-11-25 20:52:46.764438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:38.844 [2024-11-25 20:52:46.764447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:38.844 [2024-11-25 20:52:46.764457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:38.844 [2024-11-25 20:52:46.764466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:38.844 [2024-11-25 20:52:46.764476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:38.844 [2024-11-25 20:52:46.764485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:38.844 [2024-11-25 20:52:46.764504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:38.844 [2024-11-25 20:52:46.764513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:38.844 [2024-11-25 20:52:46.764535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:38.844 [2024-11-25 20:52:46.764563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:38.844 [2024-11-25 20:52:46.764573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764583] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:34:38.844 [2024-11-25 20:52:46.764594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:38.844 [2024-11-25 20:52:46.764605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:38.844 [2024-11-25 20:52:46.764615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:34:38.844 [2024-11-25 20:52:46.764625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:38.844 [2024-11-25 20:52:46.764635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:38.844 [2024-11-25 20:52:46.764645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:38.844 [2024-11-25 20:52:46.764654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:38.844 [2024-11-25 20:52:46.764664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:38.844 [2024-11-25 20:52:46.764673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:38.844 [2024-11-25 20:52:46.764684] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:38.844 [2024-11-25 20:52:46.764697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:38.844 [2024-11-25 20:52:46.764719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:38.844 [2024-11-25 20:52:46.764753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:38.844 [2024-11-25 20:52:46.764765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:38.844 [2024-11-25 20:52:46.764776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:38.844 [2024-11-25 20:52:46.764788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:38.844 [2024-11-25 20:52:46.764853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:38.845 [2024-11-25 20:52:46.764865] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:34:38.845 [2024-11-25 20:52:46.764876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:38.845 [2024-11-25 20:52:46.764892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:38.845 [2024-11-25 20:52:46.764903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:38.845 [2024-11-25 20:52:46.764914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:38.845 [2024-11-25 20:52:46.764926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:38.845 [2024-11-25 20:52:46.764937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.764947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:38.845 [2024-11-25 20:52:46.764966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.870 ms 00:34:38.845 [2024-11-25 20:52:46.764977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.808402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.808436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:38.845 [2024-11-25 20:52:46.808466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.441 ms 00:34:38.845 [2024-11-25 20:52:46.808477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.808521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.808531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:38.845 [2024-11-25 20:52:46.808542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:34:38.845 [2024-11-25 20:52:46.808553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.860004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.860057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:38.845 [2024-11-25 20:52:46.860087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.474 ms 00:34:38.845 [2024-11-25 20:52:46.860098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.860138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.860149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:38.845 [2024-11-25 20:52:46.860161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:38.845 [2024-11-25 20:52:46.860177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.860320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.860335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:38.845 [2024-11-25 20:52:46.860358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:34:38.845 [2024-11-25 20:52:46.860368] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.860416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.860427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:38.845 [2024-11-25 20:52:46.860454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:34:38.845 [2024-11-25 20:52:46.860465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.884789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.884821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:38.845 [2024-11-25 20:52:46.884834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.336 ms 00:34:38.845 [2024-11-25 20:52:46.884849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.884962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.884977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:34:38.845 [2024-11-25 20:52:46.884988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:38.845 [2024-11-25 20:52:46.884998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.922460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.922494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:34:38.845 [2024-11-25 20:52:46.922509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.502 ms 00:34:38.845 [2024-11-25 20:52:46.922522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:38.845 [2024-11-25 20:52:46.936184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:38.845 [2024-11-25 20:52:46.936223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:38.845 [2024-11-25 20:52:46.936236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.556 ms 00:34:38.845 [2024-11-25 20:52:46.936246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.105 [2024-11-25 20:52:47.028502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.105 [2024-11-25 20:52:47.028574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:34:39.105 [2024-11-25 20:52:47.028594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 92.342 ms 00:34:39.105 [2024-11-25 20:52:47.028605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.105 [2024-11-25 20:52:47.028864] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:34:39.105 [2024-11-25 20:52:47.029067] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:34:39.105 [2024-11-25 20:52:47.029246] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:34:39.106 [2024-11-25 20:52:47.029441] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:34:39.106 [2024-11-25 20:52:47.029457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.106 [2024-11-25 20:52:47.029469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:34:39.106 [2024-11-25 
20:52:47.029482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.790 ms 00:34:39.106 [2024-11-25 20:52:47.029493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.106 [2024-11-25 20:52:47.029575] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:34:39.106 [2024-11-25 20:52:47.029590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.106 [2024-11-25 20:52:47.029607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:34:39.106 [2024-11-25 20:52:47.029620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:34:39.106 [2024-11-25 20:52:47.029630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.106 [2024-11-25 20:52:47.051527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.106 [2024-11-25 20:52:47.051573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:34:39.106 [2024-11-25 20:52:47.051587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.904 ms 00:34:39.106 [2024-11-25 20:52:47.051599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.106 [2024-11-25 20:52:47.064267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.106 [2024-11-25 20:52:47.064300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:34:39.106 [2024-11-25 20:52:47.064313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:34:39.106 [2024-11-25 20:52:47.064324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.106 [2024-11-25 20:52:47.064463] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:34:39.106 [2024-11-25 20:52:47.064814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.106 [2024-11-25 20:52:47.064826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:39.106 [2024-11-25 20:52:47.064837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:34:39.106 [2024-11-25 20:52:47.064847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.675 [2024-11-25 20:52:47.650739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.675 [2024-11-25 20:52:47.650817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:39.675 [2024-11-25 20:52:47.650838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 585.708 ms 00:34:39.675 [2024-11-25 20:52:47.650851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.675 [2024-11-25 20:52:47.656843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.675 [2024-11-25 20:52:47.656882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:39.675 [2024-11-25 20:52:47.656897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.381 ms 00:34:39.675 [2024-11-25 20:52:47.656917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.675 [2024-11-25 20:52:47.657379] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:34:39.675 [2024-11-25 20:52:47.657408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.675 [2024-11-25 20:52:47.657420] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:39.675 [2024-11-25 20:52:47.657433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.459 ms 00:34:39.675 [2024-11-25 20:52:47.657444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.675 [2024-11-25 20:52:47.657476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.675 [2024-11-25 20:52:47.657489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:39.675 [2024-11-25 20:52:47.657501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:39.675 [2024-11-25 20:52:47.657518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.675 [2024-11-25 20:52:47.657558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 594.057 ms, result 0 00:34:39.675 [2024-11-25 20:52:47.657606] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:34:39.675 [2024-11-25 20:52:47.657693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.675 [2024-11-25 20:52:47.657704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:39.675 [2024-11-25 20:52:47.657716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:34:39.675 [2024-11-25 20:52:47.657726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.243 [2024-11-25 20:52:48.255590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.243 [2024-11-25 20:52:48.255678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:40.243 [2024-11-25 20:52:48.255722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 597.677 ms 00:34:40.243 [2024-11-25 20:52:48.255734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.243 [2024-11-25 20:52:48.261621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.243 [2024-11-25 20:52:48.261661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:40.243 [2024-11-25 20:52:48.261675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.331 ms 00:34:40.243 [2024-11-25 20:52:48.261686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.243 [2024-11-25 20:52:48.262341] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:34:40.243 [2024-11-25 20:52:48.262374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.243 [2024-11-25 20:52:48.262385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:40.243 [2024-11-25 20:52:48.262398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.659 ms 00:34:40.243 [2024-11-25 20:52:48.262409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.243 [2024-11-25 20:52:48.262455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.243 [2024-11-25 20:52:48.262468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:40.243 [2024-11-25 20:52:48.262479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:40.243 [2024-11-25 20:52:48.262490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.243 [2024-11-25 
20:52:48.262551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 605.924 ms, result 0 00:34:40.243 [2024-11-25 20:52:48.262601] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:40.243 [2024-11-25 20:52:48.262615] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:34:40.243 [2024-11-25 20:52:48.262629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.243 [2024-11-25 20:52:48.262641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:34:40.243 [2024-11-25 20:52:48.262653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1200.133 ms 00:34:40.243 [2024-11-25 20:52:48.262664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.243 [2024-11-25 20:52:48.262699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.243 [2024-11-25 20:52:48.262716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:34:40.243 [2024-11-25 20:52:48.262727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:34:40.243 [2024-11-25 20:52:48.262738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.243 [2024-11-25 20:52:48.275156] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:40.243 [2024-11-25 20:52:48.275330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.244 [2024-11-25 20:52:48.275345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:40.244 [2024-11-25 20:52:48.275368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.594 ms 00:34:40.244 [2024-11-25 20:52:48.275379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.244 [2024-11-25 20:52:48.276009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.244 [2024-11-25 20:52:48.276033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:34:40.244 [2024-11-25 20:52:48.276051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:34:40.244 [2024-11-25 20:52:48.276061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.244 [2024-11-25 20:52:48.278106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.244 [2024-11-25 20:52:48.278130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:34:40.244 [2024-11-25 20:52:48.278142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.025 ms 00:34:40.244 [2024-11-25 20:52:48.278153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.244 [2024-11-25 20:52:48.278197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.244 [2024-11-25 20:52:48.278210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:34:40.244 [2024-11-25 20:52:48.278221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:40.244 [2024-11-25 20:52:48.278238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.244 [2024-11-25 20:52:48.278357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.244 [2024-11-25 20:52:48.278371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:40.244 
[2024-11-25 20:52:48.278382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:34:40.244 [2024-11-25 20:52:48.278392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.244 [2024-11-25 20:52:48.278416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.244 [2024-11-25 20:52:48.278428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:40.244 [2024-11-25 20:52:48.278438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:40.244 [2024-11-25 20:52:48.278448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.244 [2024-11-25 20:52:48.278488] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:34:40.244 [2024-11-25 20:52:48.278501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.244 [2024-11-25 20:52:48.278512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:34:40.244 [2024-11-25 20:52:48.278523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:34:40.244 [2024-11-25 20:52:48.278533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.244 [2024-11-25 20:52:48.278589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:40.244 [2024-11-25 20:52:48.278601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:40.244 [2024-11-25 20:52:48.278613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:34:40.244 [2024-11-25 20:52:48.278623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:40.244 [2024-11-25 20:52:48.279798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1563.966 ms, result 0 00:34:40.244 [2024-11-25 20:52:48.292127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.244 [2024-11-25 20:52:48.308098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:40.244 [2024-11-25 20:52:48.318452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:40.244 Validate MD5 checksum, iteration 1 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:40.244 20:52:48 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:40.244 20:52:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:40.503 [2024-11-25 20:52:48.451243] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 00:34:40.503 [2024-11-25 20:52:48.451396] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84719 ] 00:34:40.503 [2024-11-25 20:52:48.634704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.762 [2024-11-25 20:52:48.766387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.667  [2024-11-25T20:52:51.062Z] Copying: 716/1024 [MB] (716 MBps) [2024-11-25T20:52:53.697Z] Copying: 1024/1024 [MB] (average 701 MBps) 00:34:45.561 00:34:45.561 20:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:45.561 20:52:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:47.521 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=88ca0ddf445409082ced5cc12cc82fac 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 88ca0ddf445409082ced5cc12cc82fac != \8\8\c\a\0\d\d\f\4\4\5\4\0\9\0\8\2\c\e\d\5\c\c\1\2\c\c\8\2\f\a\c ]] 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:47.522 Validate MD5 checksum, iteration 2 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:47.522 20:52:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:47.522 [2024-11-25 20:52:55.448119] Starting SPDK v25.01-pre git sha1 
d8f6e798d / DPDK 24.03.0 initialization... 00:34:47.522 [2024-11-25 20:52:55.448248] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84786 ] 00:34:47.522 [2024-11-25 20:52:55.636074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.781 [2024-11-25 20:52:55.766111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.687  [2024-11-25T20:52:58.082Z] Copying: 697/1024 [MB] (697 MBps) [2024-11-25T20:52:59.459Z] Copying: 1024/1024 [MB] (average 702 MBps) 00:34:51.323 00:34:51.323 20:52:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:51.323 20:52:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:53.227 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a964cd5e771cccbd97a3f9b51982922b 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a964cd5e771cccbd97a3f9b51982922b != \a\9\6\4\c\d\5\e\7\7\1\c\c\c\b\d\9\7\a\3\f\9\b\5\1\9\8\2\9\2\2\b ]] 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84673 ]] 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84673 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84673 ']' 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84673 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84673 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:53.228 killing process with pid 84673 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84673' 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84673 00:34:53.228 20:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84673 00:34:54.609 [2024-11-25 20:53:02.534865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:54.609 [2024-11-25 20:53:02.555876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.555918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:54.609 [2024-11-25 20:53:02.555953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:54.609 [2024-11-25 20:53:02.555965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.555989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:54.609 [2024-11-25 20:53:02.560738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.560766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:54.609 [2024-11-25 20:53:02.560799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.739 ms 00:34:54.609 [2024-11-25 20:53:02.560810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.561014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.561027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:54.609 [2024-11-25 20:53:02.561038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:34:54.609 [2024-11-25 20:53:02.561048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.562226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.562259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:54.609 [2024-11-25 20:53:02.562273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.163 ms 00:34:54.609 [2024-11-25 20:53:02.562289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.563224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.563248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:54.609 [2024-11-25 20:53:02.563260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.903 ms 00:34:54.609 [2024-11-25 20:53:02.563271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.577796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.577830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:54.609 [2024-11-25 20:53:02.577843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.509 ms 00:34:54.609 [2024-11-25 20:53:02.577865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.585673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.585703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:54.609 [2024-11-25 20:53:02.585717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.768 ms 00:34:54.609 [2024-11-25 20:53:02.585727] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.585824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.585837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:54.609 [2024-11-25 20:53:02.585857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:34:54.609 [2024-11-25 20:53:02.585873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.600133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.600161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:54.609 [2024-11-25 20:53:02.600174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.266 ms 00:34:54.609 [2024-11-25 20:53:02.600183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.614361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.614387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:54.609 [2024-11-25 20:53:02.614399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.167 ms 00:34:54.609 [2024-11-25 20:53:02.614425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.628540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.628567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:54.609 [2024-11-25 20:53:02.628579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.104 ms 00:34:54.609 [2024-11-25 20:53:02.628588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.642589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.609 [2024-11-25 20:53:02.642618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:54.609 [2024-11-25 20:53:02.642630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.956 ms 00:34:54.609 [2024-11-25 20:53:02.642656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.609 [2024-11-25 20:53:02.642690] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:54.609 [2024-11-25 20:53:02.642708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:54.609 [2024-11-25 20:53:02.642721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:54.609 [2024-11-25 20:53:02.642733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:54.609 [2024-11-25 20:53:02.642745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 
[2024-11-25 20:53:02.642799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:54.609 [2024-11-25 20:53:02.642874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:54.610 [2024-11-25 20:53:02.642885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:54.610 [2024-11-25 20:53:02.642896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:54.610 [2024-11-25 20:53:02.642909] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:54.610 [2024-11-25 20:53:02.642919] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3f02bbe4-bccb-4ea0-8852-d5a1565be431 00:34:54.610 [2024-11-25 20:53:02.642931] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:54.610 [2024-11-25 20:53:02.642941] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:34:54.610 [2024-11-25 20:53:02.642952] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:34:54.610 [2024-11-25 20:53:02.642962] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:34:54.610 [2024-11-25 20:53:02.642973] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:54.610 [2024-11-25 20:53:02.642983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:54.610 [2024-11-25 20:53:02.642994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:54.610 [2024-11-25 20:53:02.643006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:54.610 [2024-11-25 20:53:02.643017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:54.610 [2024-11-25 20:53:02.643027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.610 [2024-11-25 20:53:02.643043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:54.610 [2024-11-25 20:53:02.643054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:34:54.610 [2024-11-25 20:53:02.643065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.610 [2024-11-25 20:53:02.664218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.610 [2024-11-25 20:53:02.664247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:54.610 [2024-11-25 20:53:02.664260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.156 ms 00:34:54.610 [2024-11-25 20:53:02.664272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:34:54.610 [2024-11-25 20:53:02.664825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:54.610 [2024-11-25 20:53:02.664837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:34:54.610 [2024-11-25 20:53:02.664848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:34:54.610 [2024-11-25 20:53:02.664858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.610 [2024-11-25 20:53:02.733781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.610 [2024-11-25 20:53:02.733818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:54.610 [2024-11-25 20:53:02.733832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.610 [2024-11-25 20:53:02.733844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.610 [2024-11-25 20:53:02.733909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.610 [2024-11-25 20:53:02.733921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:54.610 [2024-11-25 20:53:02.733932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.610 [2024-11-25 20:53:02.733943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.610 [2024-11-25 20:53:02.734028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.610 [2024-11-25 20:53:02.734042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:54.610 [2024-11-25 20:53:02.734054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.610 [2024-11-25 20:53:02.734064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.610 [2024-11-25 20:53:02.734089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.610 [2024-11-25 20:53:02.734101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:54.610 [2024-11-25 20:53:02.734112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.610 [2024-11-25 20:53:02.734122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.865032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.870 [2024-11-25 20:53:02.865092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:54.870 [2024-11-25 20:53:02.865110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.870 [2024-11-25 20:53:02.865121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.967552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.870 [2024-11-25 20:53:02.967608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:54.870 [2024-11-25 20:53:02.967625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.870 [2024-11-25 20:53:02.967653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.967788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.870 [2024-11-25 20:53:02.967801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:54.870 [2024-11-25 20:53:02.967812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.870 [2024-11-25 20:53:02.967824] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.967881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.870 [2024-11-25 20:53:02.967908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:54.870 [2024-11-25 20:53:02.967924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.870 [2024-11-25 20:53:02.967935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.968083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.870 [2024-11-25 20:53:02.968097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:54.870 [2024-11-25 20:53:02.968108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.870 [2024-11-25 20:53:02.968119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.968159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.870 [2024-11-25 20:53:02.968172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:34:54.870 [2024-11-25 20:53:02.968188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.870 [2024-11-25 20:53:02.968199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.968247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.870 [2024-11-25 20:53:02.968259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:54.870 [2024-11-25 20:53:02.968270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.870 [2024-11-25 20:53:02.968281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.968333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:54.870 [2024-11-25 20:53:02.968346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:54.870 [2024-11-25 20:53:02.968378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:54.870 [2024-11-25 20:53:02.968390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:54.870 [2024-11-25 20:53:02.968537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 413.286 ms, result 0 00:34:56.249 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:34:56.249 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:34:56.250 Remove shared memory files 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:34:56.250 20:53:04 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84444 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:34:56.250 00:34:56.250 real 1m31.063s 00:34:56.250 user 2m3.752s 00:34:56.250 sys 0m23.987s 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:56.250 20:53:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:56.250 ************************************ 00:34:56.250 END TEST ftl_upgrade_shutdown 00:34:56.250 ************************************ 00:34:56.510 20:53:04 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:34:56.510 20:53:04 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:56.510 20:53:04 ftl -- ftl/ftl.sh@14 -- # killprocess 77090 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@954 -- # '[' -z 77090 ']' 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@958 -- # kill -0 77090 00:34:56.510 Process with pid 77090 is not found 00:34:56.510 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77090) - No such process 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77090 is not found' 00:34:56.510 20:53:04 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:56.510 20:53:04 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84920 00:34:56.510 20:53:04 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:56.510 20:53:04 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84920 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@835 -- # '[' -z 84920 ']' 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.510 20:53:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:56.510 [2024-11-25 20:53:04.518714] Starting SPDK v25.01-pre git sha1 d8f6e798d / DPDK 24.03.0 initialization... 
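The target restart and lvol cleanup that follow in this log reduce to a short RPC sequence; a minimal sketch in shell, using only the commands, bdev name, and PCIe address that appear verbatim in the surrounding entries (the spdk_repo path is this CI host's checkout):

    # attach the NVMe controller, then delete any leftover lvol stores
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    stores=$($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $RPC bdev_lvol_delete_lvstore -u "$lvs"
    done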
00:34:56.510 [2024-11-25 20:53:04.518866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84920 ] 00:34:56.769 [2024-11-25 20:53:04.704430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.769 [2024-11-25 20:53:04.834486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.707 20:53:05 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.707 20:53:05 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:57.707 20:53:05 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:57.966 nvme0n1 00:34:57.966 20:53:06 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:57.966 20:53:06 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:57.966 20:53:06 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:58.225 20:53:06 ftl -- ftl/common.sh@28 -- # stores=207b76b3-e039-4e89-ae46-3e10378bdc7b 00:34:58.225 20:53:06 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:58.225 20:53:06 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 207b76b3-e039-4e89-ae46-3e10378bdc7b 00:34:58.484 20:53:06 ftl -- ftl/ftl.sh@23 -- # killprocess 84920 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@954 -- # '[' -z 84920 ']' 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@958 -- # kill -0 84920 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@959 -- # uname 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84920 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:58.484 killing process with pid 84920 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84920' 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@973 -- # kill 84920 00:34:58.484 20:53:06 ftl -- common/autotest_common.sh@978 -- # wait 84920 00:35:01.022 20:53:09 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:01.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:01.539 Waiting for block devices as requested 00:35:01.539 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:01.798 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:01.798 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:01.798 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:07.090 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:07.090 20:53:15 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:35:07.090 20:53:15 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:07.090 Remove shared memory files 00:35:07.090 20:53:15 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:35:07.090 20:53:15 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:35:07.090 20:53:15 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:35:07.090 20:53:15 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:07.090 20:53:15 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:35:07.090 00:35:07.090 real 
11m34.107s 00:35:07.090 user 14m8.795s 00:35:07.090 sys 1m39.247s 00:35:07.090 20:53:15 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.090 ************************************ 00:35:07.090 END TEST ftl 00:35:07.090 ************************************ 00:35:07.090 20:53:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:07.090 20:53:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:07.090 20:53:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:07.090 20:53:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:07.090 20:53:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:07.090 20:53:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:07.090 20:53:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:07.090 20:53:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:07.090 20:53:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:07.090 20:53:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:07.090 20:53:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:07.090 20:53:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:07.090 20:53:15 -- common/autotest_common.sh@10 -- # set +x 00:35:07.090 20:53:15 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:07.090 20:53:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:07.090 20:53:15 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:07.090 20:53:15 -- common/autotest_common.sh@10 -- # set +x 00:35:09.626 INFO: APP EXITING 00:35:09.626 INFO: killing all VMs 00:35:09.626 INFO: killing vhost app 00:35:09.626 INFO: EXIT DONE 00:35:09.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:10.196 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:35:10.196 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:35:10.197 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:35:10.197 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:35:10.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:11.024 Cleaning 00:35:11.024 Removing: /var/run/dpdk/spdk0/config 00:35:11.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:11.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:11.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:11.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:11.024 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:11.024 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:11.024 Removing: /var/run/dpdk/spdk0 00:35:11.024 Removing: /var/run/dpdk/spdk_pid57715 00:35:11.024 Removing: /var/run/dpdk/spdk_pid57961 00:35:11.283 Removing: /var/run/dpdk/spdk_pid58190 00:35:11.283 Removing: /var/run/dpdk/spdk_pid58294 00:35:11.283 Removing: /var/run/dpdk/spdk_pid58350 00:35:11.283 Removing: /var/run/dpdk/spdk_pid58489 00:35:11.283 Removing: /var/run/dpdk/spdk_pid58507 00:35:11.283 Removing: /var/run/dpdk/spdk_pid58717 00:35:11.283 Removing: /var/run/dpdk/spdk_pid58834 00:35:11.283 Removing: /var/run/dpdk/spdk_pid58941 00:35:11.283 Removing: /var/run/dpdk/spdk_pid59069 00:35:11.283 Removing: /var/run/dpdk/spdk_pid59177 00:35:11.283 Removing: /var/run/dpdk/spdk_pid59222 00:35:11.283 Removing: /var/run/dpdk/spdk_pid59253 00:35:11.283 Removing: /var/run/dpdk/spdk_pid59329 00:35:11.283 Removing: /var/run/dpdk/spdk_pid59459 00:35:11.283 Removing: /var/run/dpdk/spdk_pid59917 00:35:11.283 Removing: /var/run/dpdk/spdk_pid59997 00:35:11.283 
Removing: /var/run/dpdk/spdk_pid60071 00:35:11.283 Removing: /var/run/dpdk/spdk_pid60093 00:35:11.283 Removing: /var/run/dpdk/spdk_pid60251 00:35:11.283 Removing: /var/run/dpdk/spdk_pid60268 00:35:11.283 Removing: /var/run/dpdk/spdk_pid60417 00:35:11.283 Removing: /var/run/dpdk/spdk_pid60439 00:35:11.283 Removing: /var/run/dpdk/spdk_pid60508 00:35:11.283 Removing: /var/run/dpdk/spdk_pid60532 00:35:11.284 Removing: /var/run/dpdk/spdk_pid60596 00:35:11.284 Removing: /var/run/dpdk/spdk_pid60614 00:35:11.284 Removing: /var/run/dpdk/spdk_pid60820 00:35:11.284 Removing: /var/run/dpdk/spdk_pid60862 00:35:11.284 Removing: /var/run/dpdk/spdk_pid60946 00:35:11.284 Removing: /var/run/dpdk/spdk_pid61134 00:35:11.284 Removing: /var/run/dpdk/spdk_pid61235 00:35:11.284 Removing: /var/run/dpdk/spdk_pid61282 00:35:11.284 Removing: /var/run/dpdk/spdk_pid61731 00:35:11.284 Removing: /var/run/dpdk/spdk_pid61836 00:35:11.284 Removing: /var/run/dpdk/spdk_pid61949 00:35:11.284 Removing: /var/run/dpdk/spdk_pid62008 00:35:11.284 Removing: /var/run/dpdk/spdk_pid62039 00:35:11.284 Removing: /var/run/dpdk/spdk_pid62123 00:35:11.284 Removing: /var/run/dpdk/spdk_pid62770 00:35:11.284 Removing: /var/run/dpdk/spdk_pid62818 00:35:11.284 Removing: /var/run/dpdk/spdk_pid63309 00:35:11.284 Removing: /var/run/dpdk/spdk_pid63418 00:35:11.284 Removing: /var/run/dpdk/spdk_pid63533 00:35:11.284 Removing: /var/run/dpdk/spdk_pid63586 00:35:11.284 Removing: /var/run/dpdk/spdk_pid63617 00:35:11.284 Removing: /var/run/dpdk/spdk_pid63648 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65545 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65694 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65698 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65716 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65763 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65767 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65779 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65829 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65833 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65845 00:35:11.284 Removing: /var/run/dpdk/spdk_pid65895 00:35:11.542 Removing: /var/run/dpdk/spdk_pid65899 00:35:11.542 Removing: /var/run/dpdk/spdk_pid65911 00:35:11.542 Removing: /var/run/dpdk/spdk_pid67338 00:35:11.542 Removing: /var/run/dpdk/spdk_pid67457 00:35:11.542 Removing: /var/run/dpdk/spdk_pid68894 00:35:11.542 Removing: /var/run/dpdk/spdk_pid70685 00:35:11.542 Removing: /var/run/dpdk/spdk_pid70770 00:35:11.542 Removing: /var/run/dpdk/spdk_pid70856 00:35:11.542 Removing: /var/run/dpdk/spdk_pid70966 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71063 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71169 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71255 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71337 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71447 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71544 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71646 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71737 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71819 00:35:11.542 Removing: /var/run/dpdk/spdk_pid71929 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72026 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72127 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72218 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72304 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72411 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72515 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72613 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72698 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72778 00:35:11.542 Removing: 
/var/run/dpdk/spdk_pid72862 00:35:11.542 Removing: /var/run/dpdk/spdk_pid72943 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73052 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73143 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73248 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73334 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73414 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73488 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73572 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73682 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73773 00:35:11.542 Removing: /var/run/dpdk/spdk_pid73928 00:35:11.542 Removing: /var/run/dpdk/spdk_pid74230 00:35:11.542 Removing: /var/run/dpdk/spdk_pid74272 00:35:11.542 Removing: /var/run/dpdk/spdk_pid74728 00:35:11.542 Removing: /var/run/dpdk/spdk_pid74921 00:35:11.542 Removing: /var/run/dpdk/spdk_pid75026 00:35:11.542 Removing: /var/run/dpdk/spdk_pid75138 00:35:11.542 Removing: /var/run/dpdk/spdk_pid75198 00:35:11.542 Removing: /var/run/dpdk/spdk_pid75229 00:35:11.542 Removing: /var/run/dpdk/spdk_pid75525 00:35:11.542 Removing: /var/run/dpdk/spdk_pid75603 00:35:11.542 Removing: /var/run/dpdk/spdk_pid75694 00:35:11.542 Removing: /var/run/dpdk/spdk_pid76133 00:35:11.542 Removing: /var/run/dpdk/spdk_pid76285 00:35:11.542 Removing: /var/run/dpdk/spdk_pid77090 00:35:11.542 Removing: /var/run/dpdk/spdk_pid77233 00:35:11.542 Removing: /var/run/dpdk/spdk_pid77453 00:35:11.542 Removing: /var/run/dpdk/spdk_pid77557 00:35:11.542 Removing: /var/run/dpdk/spdk_pid77892 00:35:11.801 Removing: /var/run/dpdk/spdk_pid78162 00:35:11.801 Removing: /var/run/dpdk/spdk_pid78516 00:35:11.801 Removing: /var/run/dpdk/spdk_pid78732 00:35:11.801 Removing: /var/run/dpdk/spdk_pid78870 00:35:11.801 Removing: /var/run/dpdk/spdk_pid78945 00:35:11.801 Removing: /var/run/dpdk/spdk_pid79088 00:35:11.801 Removing: /var/run/dpdk/spdk_pid79129 00:35:11.801 Removing: /var/run/dpdk/spdk_pid79197 00:35:11.801 Removing: /var/run/dpdk/spdk_pid79414 00:35:11.801 Removing: /var/run/dpdk/spdk_pid79656 00:35:11.801 Removing: /var/run/dpdk/spdk_pid80091 00:35:11.801 Removing: /var/run/dpdk/spdk_pid80541 00:35:11.801 Removing: /var/run/dpdk/spdk_pid80988 00:35:11.801 Removing: /var/run/dpdk/spdk_pid81513 00:35:11.801 Removing: /var/run/dpdk/spdk_pid81658 00:35:11.801 Removing: /var/run/dpdk/spdk_pid81751 00:35:11.801 Removing: /var/run/dpdk/spdk_pid82439 00:35:11.801 Removing: /var/run/dpdk/spdk_pid82514 00:35:11.801 Removing: /var/run/dpdk/spdk_pid82978 00:35:11.801 Removing: /var/run/dpdk/spdk_pid83348 00:35:11.801 Removing: /var/run/dpdk/spdk_pid83856 00:35:11.801 Removing: /var/run/dpdk/spdk_pid83985 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84038 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84102 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84159 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84229 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84444 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84539 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84600 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84673 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84719 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84786 00:35:11.801 Removing: /var/run/dpdk/spdk_pid84920 00:35:11.801 Clean 00:35:11.801 20:53:19 -- common/autotest_common.sh@1453 -- # return 0 00:35:11.801 20:53:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:11.801 20:53:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:11.801 20:53:19 -- common/autotest_common.sh@10 -- # set +x 00:35:12.060 20:53:19 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:35:12.060 20:53:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:12.060 20:53:19 -- common/autotest_common.sh@10 -- # set +x 00:35:12.060 20:53:20 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:12.060 20:53:20 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:12.060 20:53:20 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:12.060 20:53:20 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:12.060 20:53:20 -- spdk/autotest.sh@398 -- # hostname 00:35:12.060 20:53:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:12.319 geninfo: WARNING: invalid characters removed from testname! 00:35:38.943 20:53:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:39.201 20:53:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:41.729 20:53:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:44.258 20:53:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:46.791 20:53:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:48.693 20:53:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:51.233 20:53:59 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:51.233 20:53:59 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:51.233 20:53:59 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:51.233 20:53:59 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:51.233 20:53:59 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:51.233 20:53:59 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:51.233 + [[ -n 5247 ]] 00:35:51.233 + sudo kill 5247 00:35:51.240 [Pipeline] } 00:35:51.252 [Pipeline] // timeout 00:35:51.256 [Pipeline] } 00:35:51.266 [Pipeline] // stage 00:35:51.271 [Pipeline] } 00:35:51.283 [Pipeline] // catchError 00:35:51.291 [Pipeline] stage 00:35:51.292 [Pipeline] { (Stop VM) 00:35:51.304 [Pipeline] sh 00:35:51.584 + vagrant halt 00:35:54.121 ==> default: Halting domain... 00:36:00.705 [Pipeline] sh 00:36:00.987 + vagrant destroy -f 00:36:03.520 ==> default: Removing domain... 00:36:04.099 [Pipeline] sh 00:36:04.380 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:36:04.389 [Pipeline] } 00:36:04.403 [Pipeline] // stage 00:36:04.412 [Pipeline] } 00:36:04.428 [Pipeline] // dir 00:36:04.433 [Pipeline] } 00:36:04.445 [Pipeline] // wrap 00:36:04.450 [Pipeline] } 00:36:04.462 [Pipeline] // catchError 00:36:04.471 [Pipeline] stage 00:36:04.472 [Pipeline] { (Epilogue) 00:36:04.512 [Pipeline] sh 00:36:04.793 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:10.127 [Pipeline] catchError 00:36:10.129 [Pipeline] { 00:36:10.142 [Pipeline] sh 00:36:10.426 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:10.426 Artifacts sizes are good 00:36:10.435 [Pipeline] } 00:36:10.449 [Pipeline] // catchError 00:36:10.461 [Pipeline] archiveArtifacts 00:36:10.468 Archiving artifacts 00:36:10.582 [Pipeline] cleanWs 00:36:10.593 [WS-CLEANUP] Deleting project workspace... 00:36:10.593 [WS-CLEANUP] Deferred wipeout is used... 00:36:10.600 [WS-CLEANUP] done 00:36:10.602 [Pipeline] } 00:36:10.618 [Pipeline] // stage 00:36:10.623 [Pipeline] } 00:36:10.637 [Pipeline] // node 00:36:10.642 [Pipeline] End of Pipeline 00:36:10.680 Finished: SUCCESS
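For reference, the coverage post-processing in the epilogue above follows a combine-then-filter pattern; a minimal sketch with the same lcov operations and filters shown in this log (genhtml/geninfo --rc options and the long repo paths shortened for readability):

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # merge the pre-test baseline with the post-test capture
    lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info
    # strip third-party and system sources from the combined report
    lcov $LCOV_OPTS -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov $LCOV_OPTS -q -r cov_total.info '/usr/*' --ignore-errors unused,unused -o cov_total.info

The log applies the same -r step to '*/examples/vmd/*', '*/app/spdk_lspci/*', and '*/app/spdk_top/*' as well.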